I0721 00:29:38.416674 6 e2e.go:224] Starting e2e run "427532f8-cae9-11ea-86e4-0242ac110009" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595291377 - Will randomize all specs
Will run 201 of 2164 specs

Jul 21 00:29:38.591: INFO: >>> kubeConfig: /root/.kube/config
Jul 21 00:29:38.593: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 21 00:29:38.607: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 21 00:29:38.641: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 21 00:29:38.642: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 21 00:29:38.642: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 21 00:29:38.684: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 21 00:29:38.684: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 21 00:29:38.684: INFO: e2e test version: v1.13.12
Jul 21 00:29:38.685: INFO: kube-apiserver version: v1.13.12
S
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:29:38.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Jul 21 00:29:38.772: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-76b5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-76b5l to expose endpoints map[]
Jul 21 00:29:38.862: INFO: Get endpoints failed (16.894978ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 21 00:29:39.866: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-76b5l exposes endpoints map[] (1.020358237s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-76b5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-76b5l to expose endpoints map[pod1:[80]]
Jul 21 00:29:44.306: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-76b5l exposes endpoints map[pod1:[80]] (4.433906952s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-76b5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-76b5l to expose endpoints map[pod1:[80] pod2:[80]]
Jul 21 00:29:48.604: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-76b5l exposes endpoints map[pod1:[80] pod2:[80]] (4.293698039s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-76b5l
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-76b5l to expose endpoints map[pod2:[80]]
Jul 21 00:29:48.693: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-76b5l exposes endpoints map[pod2:[80]] (75.420254ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-76b5l
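[Editor's note: the endpoint-test2 flow above creates a ClusterIP Service and then pods matching its selector, asserting that the Endpoints object tracks each pod's creation and deletion. A minimal sketch of the kind of manifests involved; the selector/label key and the image are illustrative assumptions, not the test's exact spec:]

```yaml
# Sketch only: a Service plus one backing pod, as in the endpoint-test2 flow.
# The label key/value "name: pod1" and the pause image are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: e2e-tests-services-76b5l
spec:
  selector:
    name: pod1            # endpoints appear only for pods matching this selector
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: e2e-tests-services-76b5l
  labels:
    name: pod1            # matching label -> expected endpoints map[pod1:[80]]
spec:
  containers:
  - name: pod1
    image: k8s.gcr.io/pause:3.1   # placeholder; the e2e suite uses its own image
    ports:
    - containerPort: 80
```

Deleting pod1 removes its address from the Service's Endpoints object, which is exactly what the `exposes endpoints map[...]` log lines assert.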
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-76b5l to expose endpoints map[]
Jul 21 00:29:49.809: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-76b5l exposes endpoints map[] (1.112849001s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:29:49.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-76b5l" for this suite.
Jul 21 00:30:11.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:30:11.911: INFO: namespace: e2e-tests-services-76b5l, resource: bindings, ignored listing per whitelist
Jul 21 00:30:12.037: INFO: namespace e2e-tests-services-76b5l deletion completed in 22.144289633s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:33.352 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:30:12.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 21 00:30:12.184: INFO: Waiting up to 5m0s for pod "pod-56dbe6e3-cae9-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-kqkdk" to be "success or failure"
Jul 21 00:30:12.187: INFO: Pod "pod-56dbe6e3-cae9-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153038ms
Jul 21 00:30:14.457: INFO: Pod "pod-56dbe6e3-cae9-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2728617s
Jul 21 00:30:16.461: INFO: Pod "pod-56dbe6e3-cae9-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.277010841s
STEP: Saw pod success
Jul 21 00:30:16.461: INFO: Pod "pod-56dbe6e3-cae9-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:30:16.464: INFO: Trying to get logs from node hunter-worker2 pod pod-56dbe6e3-cae9-11ea-86e4-0242ac110009 container test-container:
STEP: delete the pod
Jul 21 00:30:16.669: INFO: Waiting for pod pod-56dbe6e3-cae9-11ea-86e4-0242ac110009 to disappear
Jul 21 00:30:16.738: INFO: Pod pod-56dbe6e3-cae9-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:30:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kqkdk" for this suite.
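[Editor's note: the (non-root,0777,default) case above runs a pod that exercises a file with mode 0777 inside an emptyDir volume on the default (disk-backed) medium as a non-root user, then expects the pod to reach Succeeded. A rough sketch of such a pod; the image, command, and UID are illustrative assumptions, not the suite's exact spec:]

```yaml
# Sketch only: emptyDir pod in the spirit of the test above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777          # the real test generates a UID-based name
spec:
  securityContext:
    runAsUser: 1001                # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox:1.29            # placeholder; the e2e suite uses its mounttest image
    command: ["sh", "-c", "touch /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never             # pod must terminate so its phase can become Succeeded
  volumes:
  - name: test-volume
    emptyDir: {}                   # "default" medium; the 0777 mode is verified in-container
```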
Jul 21 00:30:22.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:30:22.807: INFO: namespace: e2e-tests-emptydir-kqkdk, resource: bindings, ignored listing per whitelist
Jul 21 00:30:22.830: INFO: namespace e2e-tests-emptydir-kqkdk deletion completed in 6.088301943s
• [SLOW TEST:10.793 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:30:22.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5d464e36-cae9-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5d464e36-cae9-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:30:29.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mkjsb" for this suite.
Jul 21 00:30:55.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:30:55.162: INFO: namespace: e2e-tests-configmap-mkjsb, resource: bindings, ignored listing per whitelist
Jul 21 00:30:55.177: INFO: namespace e2e-tests-configmap-mkjsb deletion completed in 26.138778028s
• [SLOW TEST:32.346 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:30:55.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 21 00:30:55.310: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 21 00:30:55.315: INFO: Waiting for terminating namespaces to be deleted...
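[Editor's note: the ConfigMap volume-update test summarized above mounts a ConfigMap as a volume, edits the ConfigMap object, and waits for the kubelet to project the new value into the running pod. A minimal sketch of that setup; the key name, value, mount path, and pod details are assumptions:]

```yaml
# Sketch only: ConfigMap mounted as a volume, then updated in place.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-5d464e36-cae9-11ea-86e4-0242ac110009
data:
  data-1: value-1          # the test rewrites this after the pod starts
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-update   # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.29        # placeholder image
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-5d464e36-cae9-11ea-86e4-0242ac110009
```

Mounted ConfigMap volumes are refreshed by the kubelet's sync loop, which is why the test simply polls the file until the new value appears.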
Jul 21 00:30:55.317: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 21 00:30:55.322: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.322: INFO: Container kube-proxy ready: true, restart count 0
Jul 21 00:30:55.322: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.322: INFO: Container kindnet-cni ready: true, restart count 0
Jul 21 00:30:55.322: INFO: rally-82b0c52c-we8ahjnz-nn795 from c-rally-82b0c52c-eqvxi283 started at 2020-07-21 00:30:31 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.322: INFO: Container rally-82b0c52c-we8ahjnz ready: true, restart count 0
Jul 21 00:30:55.322: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 21 00:30:55.327: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.328: INFO: Container kindnet-cni ready: true, restart count 0
Jul 21 00:30:55.328: INFO: rally-82b0c52c-we8ahjnz-t4j7m from c-rally-82b0c52c-eqvxi283 started at 2020-07-21 00:30:31 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.328: INFO: Container rally-82b0c52c-we8ahjnz ready: true, restart count 0
Jul 21 00:30:55.328: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 21 00:30:55.328: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
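[Editor's note: the steps above, together with the relaunch that follows, are the standard NodeSelector predicate check: label a node, then create a pod whose nodeSelector requires that label, and verify it schedules onto exactly that node. Sketched roughly; the pod name and image are illustrative, while the label key and value mirror the ones this run logs:]

```yaml
# Sketch only. The test first labels the chosen node, equivalent to:
#   kubectl label node hunter-worker kubernetes.io/e2e-731be389-cae9-11ea-86e4-0242ac110009=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels              # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-731be389-cae9-11ea-86e4-0242ac110009: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1  # placeholder image
```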
STEP: verifying the node has the label kubernetes.io/e2e-731be389-cae9-11ea-86e4-0242ac110009 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-731be389-cae9-11ea-86e4-0242ac110009 off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-731be389-cae9-11ea-86e4-0242ac110009
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:31:03.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nwgc9" for this suite.
Jul 21 00:31:23.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:31:23.747: INFO: namespace: e2e-tests-sched-pred-nwgc9, resource: bindings, ignored listing per whitelist
Jul 21 00:31:23.796: INFO: namespace e2e-tests-sched-pred-nwgc9 deletion completed in 20.10581608s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:28.618 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:31:23.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 00:31:24.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-75k8f" to be "success or failure"
Jul 21 00:31:24.734: INFO: Pod "downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 52.604287ms
Jul 21 00:31:26.979: INFO: Pod "downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296891943s
Jul 21 00:31:28.983: INFO: Pod "downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301319548s
STEP: Saw pod success
Jul 21 00:31:28.983: INFO: Pod "downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:31:28.986: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009 container client-container:
STEP: delete the pod
Jul 21 00:31:29.020: INFO: Waiting for pod downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009 to disappear
Jul 21 00:31:29.026: INFO: Pod downwardapi-volume-820eb762-cae9-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:31:29.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-75k8f" for this suite.
Jul 21 00:31:35.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:31:35.171: INFO: namespace: e2e-tests-downward-api-75k8f, resource: bindings, ignored listing per whitelist
Jul 21 00:31:35.213: INFO: namespace e2e-tests-downward-api-75k8f deletion completed in 6.184415873s
• [SLOW TEST:11.417 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:31:35.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-886e65de-cae9-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:31:41.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hsm8t" for this suite.
Jul 21 00:32:03.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:32:03.454: INFO: namespace: e2e-tests-configmap-hsm8t, resource: bindings, ignored listing per whitelist
Jul 21 00:32:03.501: INFO: namespace e2e-tests-configmap-hsm8t deletion completed in 22.135326768s
• [SLOW TEST:28.287 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:32:03.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0721 00:32:13.758201 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 00:32:13.758: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:32:13.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-55fxs" for this suite.
Jul 21 00:32:25.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:32:25.901: INFO: namespace: e2e-tests-gc-55fxs, resource: bindings, ignored listing per whitelist
Jul 21 00:32:25.906: INFO: namespace e2e-tests-gc-55fxs deletion completed in 12.145431702s
• [SLOW TEST:22.405 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:32:25.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-zssgh
Jul 21 00:32:32.129: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-zssgh
STEP: checking the pod's current state and verifying that restartCount is present
Jul 21 00:32:32.131: INFO: Initial restart count of pod liveness-http is 0
Jul 21 00:32:49.008: INFO: Restart count of pod e2e-tests-container-probe-zssgh/liveness-http is now 1 (16.877398954s elapsed)
Jul 21 00:33:06.149: INFO: Restart count of pod e2e-tests-container-probe-zssgh/liveness-http is now 2 (34.017946725s elapsed)
Jul 21 00:33:22.790: INFO: Restart count of pod e2e-tests-container-probe-zssgh/liveness-http is now 3 (50.658602804s elapsed)
Jul 21 00:33:44.829: INFO: Restart count of pod e2e-tests-container-probe-zssgh/liveness-http is now 4 (1m12.698279126s elapsed)
Jul 21 00:34:51.165: INFO: Restart count of pod e2e-tests-container-probe-zssgh/liveness-http is now 5 (2m19.033870946s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:34:51.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zssgh" for this suite.
Jul 21 00:34:57.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:34:57.427: INFO: namespace: e2e-tests-container-probe-zssgh, resource: bindings, ignored listing per whitelist
Jul 21 00:34:57.487: INFO: namespace e2e-tests-container-probe-zssgh deletion completed in 6.248194274s
• [SLOW TEST:151.580 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:34:57.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 21 00:34:57.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-vlcgs" to be "success or failure" Jul 21 00:34:57.630: INFO: Pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.997344ms Jul 21 00:34:59.634: INFO: Pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01816097s Jul 21 00:35:01.638: INFO: Pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021927251s Jul 21 00:35:03.641: INFO: Pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025608152s STEP: Saw pod success Jul 21 00:35:03.641: INFO: Pod "downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:35:03.644: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009 container client-container: STEP: delete the pod Jul 21 00:35:03.667: INFO: Waiting for pod downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009 to disappear Jul 21 00:35:03.672: INFO: Pod downwardapi-volume-00f84eb8-caea-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:35:03.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vlcgs" for this suite. Jul 21 00:35:09.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:35:09.713: INFO: namespace: e2e-tests-downward-api-vlcgs, resource: bindings, ignored listing per whitelist Jul 21 00:35:09.757: INFO: namespace e2e-tests-downward-api-vlcgs deletion completed in 6.081917019s • [SLOW TEST:12.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 
00:35:09.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 21 00:35:10.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bks2c' Jul 21 00:35:13.474: INFO: stderr: "" Jul 21 00:35:13.474: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jul 21 00:35:18.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bks2c -o json' Jul 21 00:35:18.623: INFO: stderr: "" Jul 21 00:35:18.623: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-21T00:35:13Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-bks2c\",\n \"resourceVersion\": \"1909375\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-bks2c/pods/e2e-test-nginx-pod\",\n \"uid\": \"0a716e7c-caea-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": 
{},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8dfp9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8dfp9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8dfp9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-21T00:35:13Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-21T00:35:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-21T00:35:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-21T00:35:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://dd36d77c1ae06ef8d8aa7c797b3edf205ed82d2ae6b571c0a502b6537d5f0374\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": 
{},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-21T00:35:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.126\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-21T00:35:13Z\"\n }\n}\n" STEP: replace the image in the pod Jul 21 00:35:18.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-bks2c' Jul 21 00:35:18.977: INFO: stderr: "" Jul 21 00:35:18.977: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 21 00:35:18.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bks2c' Jul 21 00:35:27.432: INFO: stderr: "" Jul 21 00:35:27.433: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:35:27.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bks2c" for this suite. 
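The replace step logged above swaps the pod's image by piping a complete manifest into `kubectl replace -f -`; unlike `kubectl patch`, `replace` requires the full object. A minimal sketch of such a replacement manifest follows — the names, namespace, and images come from the log, but the `command` line is an assumption, since the test's exact replacement spec is not shown:

```yaml
# Sketch of a manifest for:
#   kubectl replace -f - --namespace=e2e-tests-kubectl-bks2c
# Only the image (and a keep-alive command) differ from the original pod.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-bks2c
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaces nginx:1.14-alpine, per the log
    command: ["sleep", "3600"]              # assumption: busybox needs a long-running command
```

The test then verifies the live pod reports the busybox image before deleting it in AfterEach.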
Jul 21 00:35:33.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:35:33.559: INFO: namespace: e2e-tests-kubectl-bks2c, resource: bindings, ignored listing per whitelist Jul 21 00:35:33.580: INFO: namespace e2e-tests-kubectl-bks2c deletion completed in 6.144620379s • [SLOW TEST:23.823 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:35:33.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 21 00:35:33.675: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:35:38.051: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7b6t7" for this suite. Jul 21 00:36:18.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:36:18.140: INFO: namespace: e2e-tests-pods-7b6t7, resource: bindings, ignored listing per whitelist Jul 21 00:36:18.185: INFO: namespace e2e-tests-pods-7b6t7 deletion completed in 40.130187202s • [SLOW TEST:44.605 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:36:18.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jul 21 00:36:18.654: INFO: Waiting up to 5m0s for pod "pod-314c9924-caea-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-lflx4" to be "success or failure" Jul 21 00:36:18.674: INFO: Pod "pod-314c9924-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.008987ms Jul 21 00:36:20.677: INFO: Pod "pod-314c9924-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023672302s Jul 21 00:36:22.682: INFO: Pod "pod-314c9924-caea-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02798839s STEP: Saw pod success Jul 21 00:36:22.682: INFO: Pod "pod-314c9924-caea-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:36:22.685: INFO: Trying to get logs from node hunter-worker2 pod pod-314c9924-caea-11ea-86e4-0242ac110009 container test-container: STEP: delete the pod Jul 21 00:36:22.845: INFO: Waiting for pod pod-314c9924-caea-11ea-86e4-0242ac110009 to disappear Jul 21 00:36:22.996: INFO: Pod pod-314c9924-caea-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:36:22.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lflx4" for this suite. 
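The EmptyDir test above runs a short-lived pod that mounts a volume on the node's default medium (node disk, since `medium: Memory` is omitted) and checks the mount's permissions, succeeding when the pod exits cleanly. A hedged sketch of an equivalent pod — the `mounttest` image and its flags mirror what this suite typically uses, but are assumptions here:

```yaml
# Sketch: emptyDir on the default medium; the container prints the volume's
# mode and exits, so the pod reaches Succeeded (the "success or failure" wait).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args: ["--fs_type=/test-volume", "--file_perm=/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium field -> default (disk-backed) medium
```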
Jul 21 00:36:29.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:36:29.057: INFO: namespace: e2e-tests-emptydir-lflx4, resource: bindings, ignored listing per whitelist Jul 21 00:36:29.095: INFO: namespace e2e-tests-emptydir-lflx4 deletion completed in 6.09521626s • [SLOW TEST:10.910 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:36:29.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3799c0e0-caea-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume configMaps Jul 21 00:36:29.244: INFO: Waiting up to 5m0s for pod "pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-lcq5f" to be "success or failure" Jul 21 00:36:29.266: INFO: Pod "pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.621617ms Jul 21 00:36:31.590: INFO: Pod "pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.345884368s Jul 21 00:36:33.625: INFO: Pod "pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.380926788s STEP: Saw pod success Jul 21 00:36:33.625: INFO: Pod "pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:36:33.659: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009 container configmap-volume-test: STEP: delete the pod Jul 21 00:36:33.882: INFO: Waiting for pod pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009 to disappear Jul 21 00:36:34.116: INFO: Pod pod-configmaps-379c57ec-caea-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:36:34.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lcq5f" for this suite. 
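The ConfigMap test above follows the same pattern: create a ConfigMap, mount it as a volume, and have the container read a key back as a file. A sketch under assumed names (keys, values, and the reader image are hypothetical):

```yaml
# Sketch: a ConfigMap consumed as a volume; each data key becomes a file
# under the mountPath.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```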
Jul 21 00:36:40.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:36:40.194: INFO: namespace: e2e-tests-configmap-lcq5f, resource: bindings, ignored listing per whitelist Jul 21 00:36:40.210: INFO: namespace e2e-tests-configmap-lcq5f deletion completed in 6.088768038s • [SLOW TEST:11.114 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:36:40.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 21 00:36:47.101: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3e953a26-caea-11ea-86e4-0242ac110009,GenerateName:,Namespace:e2e-tests-events-trwwq,SelfLink:/api/v1/namespaces/e2e-tests-events-trwwq/pods/send-events-3e953a26-caea-11ea-86e4-0242ac110009,UID:3ea7e2a4-caea-11ea-b2c9-0242ac120008,ResourceVersion:1909742,Generation:0,CreationTimestamp:2020-07-21 00:36:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 934669128,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mhcx4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mhcx4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mhcx4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00165d550} {node.kubernetes.io/unreachable Exists NoExecute 0xc00165d570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:36:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:36:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:36:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:36:41 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.109,StartTime:2020-07-21 00:36:41 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-21 00:36:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://528dac52908b5607f68ec9f92c92ceff78776d278ab7cd08868a13f2be4c03f4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 21 00:36:49.106: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 21 00:36:51.110: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:36:51.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-trwwq" for this suite. 
Jul 21 00:37:29.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:37:29.311: INFO: namespace: e2e-tests-events-trwwq, resource: bindings, ignored listing per whitelist Jul 21 00:37:29.354: INFO: namespace e2e-tests-events-trwwq deletion completed in 38.080083194s • [SLOW TEST:49.144 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:37:29.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 21 00:37:29.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:29.636: INFO: Number of nodes with available pods: 0 Jul 21 00:37:29.636: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:37:30.641: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:30.645: INFO: Number of nodes with available pods: 0 Jul 21 00:37:30.645: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:37:31.642: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:31.645: INFO: Number of nodes with available pods: 0 Jul 21 00:37:31.646: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:37:32.657: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:32.873: INFO: Number of nodes with available pods: 0 Jul 21 00:37:32.873: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:37:33.660: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:33.678: INFO: Number of nodes with available pods: 1 Jul 21 00:37:33.678: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:37:34.642: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:34.645: INFO: Number of nodes with available pods: 2 Jul 21 00:37:34.646: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 21 00:37:34.667: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:34.670: INFO: Number of nodes with available pods: 1 Jul 21 00:37:34.670: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:35.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:35.677: INFO: Number of nodes with available pods: 1 Jul 21 00:37:35.677: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:36.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:36.709: INFO: Number of nodes with available pods: 1 Jul 21 00:37:36.709: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:37.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:37.690: INFO: Number of nodes with available pods: 1 Jul 21 00:37:37.690: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:38.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:38.678: INFO: Number of nodes with available pods: 1 Jul 21 00:37:38.678: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:39.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:39.690: INFO: Number of nodes with available pods: 1 Jul 21 00:37:39.690: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:40.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:40.689: INFO: Number of nodes with available pods: 1 Jul 21 00:37:40.689: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:41.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:41.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:41.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:42.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:42.678: INFO: Number of nodes with available pods: 1 Jul 21 00:37:42.678: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:43.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:43.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:43.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:44.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:44.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:44.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:45.674: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:45.676: INFO: Number of nodes with available pods: 1 Jul 21 00:37:45.676: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:46.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:46.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:46.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:47.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:47.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:47.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:48.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:48.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:48.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:49.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:49.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:49.679: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:37:50.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:50.679: INFO: Number of nodes with available pods: 1 Jul 21 00:37:50.679: INFO: Node hunter-worker2 is running more than one daemon 
pod Jul 21 00:37:51.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:37:51.744: INFO: Number of nodes with available pods: 2 Jul 21 00:37:51.744: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-48c9x, will wait for the garbage collector to delete the pods Jul 21 00:37:51.803: INFO: Deleting DaemonSet.extensions daemon-set took: 5.228051ms Jul 21 00:37:52.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23747ms Jul 21 00:38:07.608: INFO: Number of nodes with available pods: 0 Jul 21 00:38:07.608: INFO: Number of running nodes: 0, number of available pods: 0 Jul 21 00:38:07.613: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-48c9x/daemonsets","resourceVersion":"1910069"},"items":null} Jul 21 00:38:07.615: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-48c9x/pods","resourceVersion":"1910069"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:38:07.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-48c9x" for this suite. 
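The DaemonSet run above schedules one pod per schedulable node (the two workers), but not on `hunter-control-plane`, whose `node-role.kubernetes.io/master:NoSchedule` taint the pods do not tolerate — hence the repeated "skip checking this node" lines. A sketch of a comparably simple DaemonSet (the selector label is hypothetical; the image is one seen earlier in this log):

```yaml
# Sketch: a minimal DaemonSet like the test's "daemon-set". Deleting one of
# its pods triggers the controller to revive it, which is what the
# "Stop a daemon pod, check that the daemon pod is revived" step polls for.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # hypothetical label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```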
Jul 21 00:38:13.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:38:14.341: INFO: namespace: e2e-tests-daemonsets-48c9x, resource: bindings, ignored listing per whitelist Jul 21 00:38:14.348: INFO: namespace e2e-tests-daemonsets-48c9x deletion completed in 6.678875285s • [SLOW TEST:44.994 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:38:14.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-766b567e-caea-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume secrets Jul 21 00:38:15.025: INFO: Waiting up to 5m0s for pod "pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-tcctk" to be "success or failure" Jul 21 00:38:15.067: INFO: Pod "pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.486178ms Jul 21 00:38:17.071: INFO: Pod "pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046661489s Jul 21 00:38:19.075: INFO: Pod "pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049725643s STEP: Saw pod success Jul 21 00:38:19.075: INFO: Pod "pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:38:19.077: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009 container secret-volume-test: STEP: delete the pod Jul 21 00:38:19.138: INFO: Waiting for pod pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009 to disappear Jul 21 00:38:19.153: INFO: Pod pod-secrets-7680d33c-caea-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:38:19.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tcctk" for this suite. 
Jul 21 00:38:25.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:38:25.272: INFO: namespace: e2e-tests-secrets-tcctk, resource: bindings, ignored listing per whitelist Jul 21 00:38:25.280: INFO: namespace e2e-tests-secrets-tcctk deletion completed in 6.124243201s • [SLOW TEST:10.932 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:38:25.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7d0e724b-caea-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume secrets Jul 21 00:38:25.777: INFO: Waiting up to 5m0s for pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-gx4pv" to be "success or failure" Jul 21 00:38:25.816: INFO: Pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.70016ms Jul 21 00:38:27.879: INFO: Pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101838007s Jul 21 00:38:29.883: INFO: Pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105605269s Jul 21 00:38:31.887: INFO: Pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109887301s STEP: Saw pod success Jul 21 00:38:31.887: INFO: Pod "pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:38:31.890: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009 container secret-volume-test: STEP: delete the pod Jul 21 00:38:31.914: INFO: Waiting for pod pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009 to disappear Jul 21 00:38:31.937: INFO: Pod pod-secrets-7d11d532-caea-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:38:31.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gx4pv" for this suite. 
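The second Secrets test above adds two twists: the pod runs as a non-root user, and the secret volume gets an explicit `defaultMode` plus an `fsGroup` so the non-root user can still read the files. A sketch with assumed uid/gid and mode values (the test's actual numbers are not shown in the log):

```yaml
# Sketch: Secret volume readable by a non-root user. fsGroup makes the
# kubelet chgrp the volume files; defaultMode sets their permission bits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot    # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # assumption: any non-root uid
    fsGroup: 1001              # assumption: volume files get this group
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args: ["--file_mode=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # hypothetical
      defaultMode: 0400        # YAML octal; appears as 256 in JSON output
```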
Jul 21 00:38:38.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:38:38.062: INFO: namespace: e2e-tests-secrets-gx4pv, resource: bindings, ignored listing per whitelist
Jul 21 00:38:38.090: INFO: namespace e2e-tests-secrets-gx4pv deletion completed in 6.149345854s
• [SLOW TEST:12.810 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:38:38.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 21 00:38:42.728: INFO: Successfully updated pod "annotationupdate8476fd27-caea-11ea-86e4-0242ac110009"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:38:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6w69q" for this suite.
Jul 21 00:39:07.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:39:07.087: INFO: namespace: e2e-tests-projected-6w69q, resource: bindings, ignored listing per whitelist
Jul 21 00:39:07.456: INFO: namespace e2e-tests-projected-6w69q deletion completed in 22.652083521s
• [SLOW TEST:29.366 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:39:07.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 00:39:08.124: INFO: Creating deployment "test-recreate-deployment"
Jul 21 00:39:08.202: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 21 00:39:08.219: INFO: new replicaset for deployment "test-recreate-deployment" is
yet to be created Jul 21 00:39:10.590: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 21 00:39:10.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 21 00:39:13.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888748, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jul 21 00:39:14.999: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 21 00:39:15.005: INFO: Updating deployment test-recreate-deployment Jul 21 00:39:15.005: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 21 00:39:16.897: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-dgbx2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dgbx2/deployments/test-recreate-deployment,UID:9650cfe8-caea-11ea-b2c9-0242ac120008,ResourceVersion:1910405,Generation:2,CreationTimestamp:2020-07-21 00:39:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-21 00:39:16 +0000 UTC 2020-07-21 00:39:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-21 00:39:16 +0000 UTC 2020-07-21 00:39:08 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jul 21 00:39:16.901: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-dgbx2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dgbx2/replicasets/test-recreate-deployment-589c4bfd,UID:9affb5b7-caea-11ea-b2c9-0242ac120008,ResourceVersion:1910404,Generation:1,CreationTimestamp:2020-07-21 00:39:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9650cfe8-caea-11ea-b2c9-0242ac120008 0xc0019cb68f 0xc0019cb6a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 21 00:39:16.901: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 21 00:39:16.901: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-dgbx2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dgbx2/replicasets/test-recreate-deployment-5bf7f65dc,UID:965f2e0a-caea-11ea-b2c9-0242ac120008,ResourceVersion:1910392,Generation:2,CreationTimestamp:2020-07-21 00:39:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9650cfe8-caea-11ea-b2c9-0242ac120008 0xc0019cb760 0xc0019cb761}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 21 00:39:17.087: INFO: Pod "test-recreate-deployment-589c4bfd-bbkzr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-bbkzr,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-dgbx2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dgbx2/pods/test-recreate-deployment-589c4bfd-bbkzr,UID:9b08b301-caea-11ea-b2c9-0242ac120008,ResourceVersion:1910407,Generation:0,CreationTimestamp:2020-07-21 00:39:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 9affb5b7-caea-11ea-b2c9-0242ac120008 0xc000dde90f 0xc000dde920}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-t5pwn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t5pwn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t5pwn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000dde990} {node.kubernetes.io/unreachable Exists NoExecute 0xc000dde9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:39:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:39:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:39:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 00:39:16 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-21 00:39:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:39:17.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dgbx2" for this suite. 
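The Deployment dumps above are dense; stripped to the fields that actually drive the test, the object reduces to the sketch below. This is our reconstruction from the dump (label `name: sample-pod-3`, `strategy.type: Recreate`, the nginx 1.14-alpine and redis test images), not the exact manifest the framework submits.

```shell
# Minimal Recreate-strategy Deployment matching the dump above. With
# type: Recreate, a rollout scales the old ReplicaSet to 0 before the new
# ReplicaSet creates any pods, so old and new pods never run together.
cat > /tmp/recreate-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate   # no RollingUpdate parameters apply
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Triggering the second revision seen in the log would look like:
#   kubectl apply -f /tmp/recreate-deploy.yaml
#   kubectl set image deployment/test-recreate-deployment nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
#   kubectl rollout status deployment/test-recreate-deployment
grep 'type: Recreate' /tmp/recreate-deploy.yaml
```

This is why the log shows the old ReplicaSet (`test-recreate-deployment-5bf7f65dc`) at `Replicas:*0` while the new one (`test-recreate-deployment-589c4bfd`) holds the single pod still in `ContainerCreating`.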
Jul 21 00:39:25.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:39:25.795: INFO: namespace: e2e-tests-deployment-dgbx2, resource: bindings, ignored listing per whitelist
Jul 21 00:39:25.799: INFO: namespace e2e-tests-deployment-dgbx2 deletion completed in 8.709612477s
• [SLOW TEST:18.343 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:39:25.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul 21 00:39:25.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fggc2'
Jul 21 00:39:26.248: INFO: stderr: ""
Jul 21 00:39:26.248: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jul 21 00:39:27.253: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:27.253: INFO: Found 0 / 1 Jul 21 00:39:28.252: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:28.252: INFO: Found 0 / 1 Jul 21 00:39:29.253: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:29.253: INFO: Found 0 / 1 Jul 21 00:39:30.253: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:30.253: INFO: Found 0 / 1 Jul 21 00:39:31.253: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:31.253: INFO: Found 1 / 1 Jul 21 00:39:31.253: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 21 00:39:31.255: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:39:31.255: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jul 21 00:39:31.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2' Jul 21 00:39:31.363: INFO: stderr: "" Jul 21 00:39:31.363: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Jul 00:39:29.404 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Jul 00:39:29.404 # Server started, Redis version 3.2.12\n1:M 21 Jul 00:39:29.404 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Jul 00:39:29.404 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jul 21 00:39:31.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2 --tail=1' Jul 21 00:39:31.464: INFO: stderr: "" Jul 21 00:39:31.464: INFO: stdout: "1:M 21 Jul 00:39:29.404 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jul 21 00:39:31.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2 --limit-bytes=1' Jul 21 00:39:31.563: INFO: stderr: "" Jul 21 00:39:31.563: INFO: stdout: " " STEP: exposing timestamps Jul 21 00:39:31.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2 --tail=1 --timestamps' Jul 21 00:39:31.809: INFO: stderr: "" Jul 21 00:39:31.809: INFO: stdout: "2020-07-21T00:39:29.40485462Z 1:M 21 Jul 00:39:29.404 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jul 21 00:39:34.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2 --since=1s' Jul 21 00:39:34.441: INFO: stderr: "" Jul 21 00:39:34.441: INFO: stdout: "" Jul 21 00:39:34.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vnm8f redis-master --namespace=e2e-tests-kubectl-fggc2 --since=24h' Jul 21 00:39:34.551: INFO: stderr: "" Jul 21 00:39:34.551: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Jul 00:39:29.404 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Jul 00:39:29.404 # Server started, Redis version 3.2.12\n1:M 21 Jul 00:39:29.404 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Jul 00:39:29.404 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jul 21 00:39:34.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fggc2' Jul 21 00:39:34.681: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 21 00:39:34.681: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jul 21 00:39:34.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-fggc2' Jul 21 00:39:34.797: INFO: stderr: "No resources found.\n" Jul 21 00:39:34.797: INFO: stdout: "" Jul 21 00:39:34.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-fggc2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 21 00:39:34.901: INFO: stderr: "" Jul 21 00:39:34.901: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:39:34.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fggc2" for this suite. 
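The run above exercises kubectl's main log-filtering flags in sequence (note it uses the older `kubectl log` spelling; `kubectl logs` is the current form). A sketch collecting the same invocations into one script, with the pod and namespace from this run standing in as placeholders:

```shell
# Collect the log-filtering invocations from the test into a reusable script.
# POD and NS are placeholders taken from this run; substitute your own.
POD=redis-master-vnm8f
NS=e2e-tests-kubectl-fggc2
cat > /tmp/log-filters.sh <<EOF
kubectl logs $POD redis-master --namespace=$NS --tail=1               # last line only
kubectl logs $POD redis-master --namespace=$NS --limit-bytes=1        # first byte only
kubectl logs $POD redis-master --namespace=$NS --tail=1 --timestamps  # prefix each line with its timestamp
kubectl logs $POD redis-master --namespace=$NS --since=1s             # only entries from the last second
kubectl logs $POD redis-master --namespace=$NS --since=24h            # only entries from the last day
EOF
cat /tmp/log-filters.sh
```

As the log shows, `--since=1s` can legitimately return nothing when the container has been quiet, while `--since=24h` returns the full startup banner again.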
Jul 21 00:39:59.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:39:59.174: INFO: namespace: e2e-tests-kubectl-fggc2, resource: bindings, ignored listing per whitelist
Jul 21 00:39:59.215: INFO: namespace e2e-tests-kubectl-fggc2 deletion completed in 24.310722352s
• [SLOW TEST:33.416 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:39:59.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6jd7t
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 21 00:39:59.353: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 21 00:40:29.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.115:8080/hostName | grep -v
'^\s*$'] Namespace:e2e-tests-pod-network-test-6jd7t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 21 00:40:29.728: INFO: >>> kubeConfig: /root/.kube/config I0721 00:40:29.754528 6 log.go:172] (0xc000675970) (0xc001902640) Create stream I0721 00:40:29.754555 6 log.go:172] (0xc000675970) (0xc001902640) Stream added, broadcasting: 1 I0721 00:40:29.757018 6 log.go:172] (0xc000675970) Reply frame received for 1 I0721 00:40:29.757055 6 log.go:172] (0xc000675970) (0xc001ac43c0) Create stream I0721 00:40:29.757065 6 log.go:172] (0xc000675970) (0xc001ac43c0) Stream added, broadcasting: 3 I0721 00:40:29.757866 6 log.go:172] (0xc000675970) Reply frame received for 3 I0721 00:40:29.757896 6 log.go:172] (0xc000675970) (0xc001b5c000) Create stream I0721 00:40:29.757907 6 log.go:172] (0xc000675970) (0xc001b5c000) Stream added, broadcasting: 5 I0721 00:40:29.758707 6 log.go:172] (0xc000675970) Reply frame received for 5 I0721 00:40:29.841576 6 log.go:172] (0xc000675970) Data frame received for 5 I0721 00:40:29.841636 6 log.go:172] (0xc001b5c000) (5) Data frame handling I0721 00:40:29.841666 6 log.go:172] (0xc000675970) Data frame received for 3 I0721 00:40:29.841681 6 log.go:172] (0xc001ac43c0) (3) Data frame handling I0721 00:40:29.841711 6 log.go:172] (0xc001ac43c0) (3) Data frame sent I0721 00:40:29.841739 6 log.go:172] (0xc000675970) Data frame received for 3 I0721 00:40:29.841758 6 log.go:172] (0xc001ac43c0) (3) Data frame handling I0721 00:40:29.843398 6 log.go:172] (0xc000675970) Data frame received for 1 I0721 00:40:29.843425 6 log.go:172] (0xc001902640) (1) Data frame handling I0721 00:40:29.843441 6 log.go:172] (0xc001902640) (1) Data frame sent I0721 00:40:29.843572 6 log.go:172] (0xc000675970) (0xc001902640) Stream removed, broadcasting: 1 I0721 00:40:29.843656 6 log.go:172] (0xc000675970) (0xc001902640) Stream removed, broadcasting: 1 I0721 00:40:29.843675 6 log.go:172] (0xc000675970) 
(0xc001ac43c0) Stream removed, broadcasting: 3 I0721 00:40:29.843683 6 log.go:172] (0xc000675970) (0xc001b5c000) Stream removed, broadcasting: 5 Jul 21 00:40:29.843: INFO: Found all expected endpoints: [netserver-0] I0721 00:40:29.844007 6 log.go:172] (0xc000675970) Go away received Jul 21 00:40:29.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.137:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6jd7t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 21 00:40:29.847: INFO: >>> kubeConfig: /root/.kube/config I0721 00:40:29.872065 6 log.go:172] (0xc0000ebd90) (0xc001ac4640) Create stream I0721 00:40:29.872102 6 log.go:172] (0xc0000ebd90) (0xc001ac4640) Stream added, broadcasting: 1 I0721 00:40:29.874218 6 log.go:172] (0xc0000ebd90) Reply frame received for 1 I0721 00:40:29.874250 6 log.go:172] (0xc0000ebd90) (0xc001838000) Create stream I0721 00:40:29.874262 6 log.go:172] (0xc0000ebd90) (0xc001838000) Stream added, broadcasting: 3 I0721 00:40:29.875027 6 log.go:172] (0xc0000ebd90) Reply frame received for 3 I0721 00:40:29.875067 6 log.go:172] (0xc0000ebd90) (0xc001902780) Create stream I0721 00:40:29.875081 6 log.go:172] (0xc0000ebd90) (0xc001902780) Stream added, broadcasting: 5 I0721 00:40:29.875796 6 log.go:172] (0xc0000ebd90) Reply frame received for 5 I0721 00:40:29.929182 6 log.go:172] (0xc0000ebd90) Data frame received for 3 I0721 00:40:29.929215 6 log.go:172] (0xc001838000) (3) Data frame handling I0721 00:40:29.929238 6 log.go:172] (0xc001838000) (3) Data frame sent I0721 00:40:29.929258 6 log.go:172] (0xc0000ebd90) Data frame received for 3 I0721 00:40:29.929271 6 log.go:172] (0xc001838000) (3) Data frame handling I0721 00:40:29.929497 6 log.go:172] (0xc0000ebd90) Data frame received for 5 I0721 00:40:29.929530 6 log.go:172] (0xc001902780) (5) Data frame handling I0721 00:40:29.930984 6 
log.go:172] (0xc0000ebd90) Data frame received for 1 I0721 00:40:29.931024 6 log.go:172] (0xc001ac4640) (1) Data frame handling I0721 00:40:29.931051 6 log.go:172] (0xc001ac4640) (1) Data frame sent I0721 00:40:29.931063 6 log.go:172] (0xc0000ebd90) (0xc001ac4640) Stream removed, broadcasting: 1 I0721 00:40:29.931157 6 log.go:172] (0xc0000ebd90) (0xc001ac4640) Stream removed, broadcasting: 1 I0721 00:40:29.931187 6 log.go:172] (0xc0000ebd90) (0xc001838000) Stream removed, broadcasting: 3 I0721 00:40:29.931207 6 log.go:172] (0xc0000ebd90) (0xc001902780) Stream removed, broadcasting: 5 Jul 21 00:40:29.931: INFO: Found all expected endpoints: [netserver-1] I0721 00:40:29.931278 6 log.go:172] (0xc0000ebd90) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:40:29.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6jd7t" for this suite. 
Jul 21 00:40:56.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:40:56.400: INFO: namespace: e2e-tests-pod-network-test-6jd7t, resource: bindings, ignored listing per whitelist Jul 21 00:40:56.414: INFO: namespace e2e-tests-pod-network-test-6jd7t deletion completed in 26.478888552s • [SLOW TEST:57.199 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:40:56.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 21 00:40:56.513: INFO: PodSpec: initContainers in spec.initContainers Jul 21 00:41:50.391: INFO: init container has failed twice: 
&v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d6eb7b7d-caea-11ea-86e4-0242ac110009", GenerateName:"", Namespace:"e2e-tests-init-container-cbw7w", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-cbw7w/pods/pod-init-d6eb7b7d-caea-11ea-86e4-0242ac110009", UID:"d6f7e017-caea-11ea-b2c9-0242ac120008", ResourceVersion:"1911073", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730888856, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"513631895"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h9kk6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001140540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9kk6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9kk6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9kk6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001210398), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00079b620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001210420)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001210440)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001210448), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00121044c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888857, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888857, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888857, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730888856, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.138", StartTime:(*v1.Time)(0xc0018c4e80), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002c6e00)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002c6e70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c84bc6df6369b21f52542204b43668181e998fb015971013e8951f9f05cf8f1a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c4ec0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c4ea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:41:50.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cbw7w" for this suite. 
Jul 21 00:42:14.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:42:14.444: INFO: namespace: e2e-tests-init-container-cbw7w, resource: bindings, ignored listing per whitelist Jul 21 00:42:14.495: INFO: namespace e2e-tests-init-container-cbw7w deletion completed in 24.099242164s • [SLOW TEST:78.081 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:42:14.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-xw26h STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-xw26h STEP: Deleting pre-stop pod Jul 21 00:42:27.734: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:42:27.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-xw26h" for this suite. Jul 21 00:43:08.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:43:08.102: INFO: namespace: e2e-tests-prestop-xw26h, resource: bindings, ignored listing per whitelist Jul 21 00:43:08.105: INFO: namespace e2e-tests-prestop-xw26h deletion completed in 40.334273536s • [SLOW TEST:53.610 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:43:08.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating
configMap with name projected-configmap-test-volume-2572a038-caeb-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume configMaps Jul 21 00:43:08.282: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-hcf5d" to be "success or failure" Jul 21 00:43:08.304: INFO: Pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.946656ms Jul 21 00:43:10.308: INFO: Pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026046393s Jul 21 00:43:12.312: INFO: Pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.03011177s Jul 21 00:43:14.317: INFO: Pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034705059s STEP: Saw pod success Jul 21 00:43:14.317: INFO: Pod "pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:43:14.319: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Jul 21 00:43:14.423: INFO: Waiting for pod pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009 to disappear Jul 21 00:43:14.436: INFO: Pod pod-projected-configmaps-2573187a-caeb-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:43:14.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hcf5d" for this suite. 
Jul 21 00:43:20.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:43:20.483: INFO: namespace: e2e-tests-projected-hcf5d, resource: bindings, ignored listing per whitelist Jul 21 00:43:20.528: INFO: namespace e2e-tests-projected-hcf5d deletion completed in 6.087910362s • [SLOW TEST:12.422 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:43:20.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 21 00:43:20.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jul 21 00:43:20.776: INFO: stderr: "" Jul 21 00:43:20.776: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jul 21 00:43:20.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k5wjh' Jul 21 00:43:21.095: INFO: stderr: "" Jul 21 00:43:21.095: INFO: stdout: "replicationcontroller/redis-master created\n" Jul 21 00:43:21.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k5wjh' Jul 21 00:43:21.440: INFO: stderr: "" Jul 21 00:43:21.440: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jul 21 00:43:22.444: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:22.444: INFO: Found 0 / 1 Jul 21 00:43:23.577: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:23.577: INFO: Found 0 / 1 Jul 21 00:43:24.444: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:24.444: INFO: Found 0 / 1 Jul 21 00:43:25.444: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:25.444: INFO: Found 0 / 1 Jul 21 00:43:26.444: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:26.444: INFO: Found 1 / 1 Jul 21 00:43:26.444: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 21 00:43:26.446: INFO: Selector matched 1 pods for map[app:redis] Jul 21 00:43:26.446: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 21 00:43:26.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-x8264 --namespace=e2e-tests-kubectl-k5wjh' Jul 21 00:43:26.558: INFO: stderr: "" Jul 21 00:43:26.558: INFO: stdout: "Name: redis-master-x8264\nNamespace: e2e-tests-kubectl-k5wjh\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.18.0.2\nStart Time: Tue, 21 Jul 2020 00:43:21 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.142\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://c9f3a31d14f611866d1e71087484115480f16519d3e947cae60bef61fccaa4d0\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 21 Jul 2020 00:43:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dz7dn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dz7dn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dz7dn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-k5wjh/redis-master-x8264 to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Jul 21 00:43:26.558: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-k5wjh' Jul 21 00:43:26.685: INFO: stderr: "" Jul 21 00:43:26.685: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-k5wjh\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-x8264\n" Jul 21 00:43:26.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-k5wjh' Jul 21 00:43:26.802: INFO: stderr: "" Jul 21 00:43:26.802: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-k5wjh\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.185.82\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.142:6379\nSession Affinity: None\nEvents: \n" Jul 21 00:43:26.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jul 21 00:43:26.928: INFO: stderr: "" Jul 21 00:43:26.928: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:22:18 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n 
Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 21 Jul 2020 00:43:18 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 21 Jul 2020 00:43:18 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 21 Jul 2020 00:43:18 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 21 Jul 2020 00:43:18 +0000 Fri, 10 Jul 2020 10:23:08 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.8\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 86b921187bcd42a69301f53c2d21b8f0\n System UUID: dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-46fs4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n kube-system coredns-54ff9cd656-gzt7d 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kindnet-r4bfs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 10d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 
(0%) 10d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-proxy-4jv56 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n local-path-storage local-path-provisioner-674595c7-jw5rw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 21 00:43:26.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-k5wjh' Jul 21 00:43:27.042: INFO: stderr: "" Jul 21 00:43:27.042: INFO: stdout: "Name: e2e-tests-kubectl-k5wjh\nLabels: e2e-framework=kubectl\n e2e-run=427532f8-cae9-11ea-86e4-0242ac110009\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:43:27.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k5wjh" for this suite. 
Jul 21 00:43:51.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:43:51.101: INFO: namespace: e2e-tests-kubectl-k5wjh, resource: bindings, ignored listing per whitelist
Jul 21 00:43:51.143: INFO: namespace e2e-tests-kubectl-k5wjh deletion completed in 24.097548392s

• [SLOW TEST:30.615 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:43:51.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-psm94
Jul 21 00:43:55.314: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-psm94
STEP: checking the pod's current state and verifying that restartCount is present
Jul 21 00:43:55.317: INFO: Initial restart count of pod liveness-http is 0
Jul 21 00:44:11.427: INFO: Restart count of pod e2e-tests-container-probe-psm94/liveness-http is now 1 (16.110016817s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:44:11.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-psm94" for this suite.
Jul 21 00:44:17.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:44:17.638: INFO: namespace: e2e-tests-container-probe-psm94, resource: bindings, ignored listing per whitelist
Jul 21 00:44:17.657: INFO: namespace e2e-tests-container-probe-psm94 deletion completed in 6.161718229s

• [SLOW TEST:26.513 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:44:17.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4ee48d8e-caeb-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 00:44:17.817: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-9wn2b" to be "success or failure"
Jul 21 00:44:17.821: INFO: Pod "pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530474ms
Jul 21 00:44:19.825: INFO: Pod "pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007519379s
Jul 21 00:44:21.829: INFO: Pod "pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01132696s
STEP: Saw pod success
Jul 21 00:44:21.829: INFO: Pod "pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:44:21.831: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009 container projected-secret-volume-test:
STEP: delete the pod
Jul 21 00:44:22.097: INFO: Waiting for pod pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:44:22.186: INFO: Pod pod-projected-secrets-4ee6c8f3-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:44:22.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9wn2b" for this suite.
Jul 21 00:44:28.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:44:28.283: INFO: namespace: e2e-tests-projected-9wn2b, resource: bindings, ignored listing per whitelist
Jul 21 00:44:28.384: INFO: namespace e2e-tests-projected-9wn2b deletion completed in 6.194046501s

• [SLOW TEST:10.727 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:44:28.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 21 00:44:28.506: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix095614791/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:44:28.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pptdq" for this suite.
Jul 21 00:44:34.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:44:34.665: INFO: namespace: e2e-tests-kubectl-pptdq, resource: bindings, ignored listing per whitelist
Jul 21 00:44:34.715: INFO: namespace e2e-tests-kubectl-pptdq deletion completed in 6.135181639s

• [SLOW TEST:6.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:44:34.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-590c21b8-caeb-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 00:44:34.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-tffnz" to be "success or failure"
Jul 21 00:44:34.851: INFO: Pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793699ms
Jul 21 00:44:36.902: INFO: Pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054477105s
Jul 21 00:44:38.905: INFO: Pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.058106872s
Jul 21 00:44:40.909: INFO: Pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061882197s
STEP: Saw pod success
Jul 21 00:44:40.909: INFO: Pod "pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:44:40.912: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009 container configmap-volume-test:
STEP: delete the pod
Jul 21 00:44:40.943: INFO: Waiting for pod pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:44:40.947: INFO: Pod pod-configmaps-590d6c35-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:44:40.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tffnz" for this suite.
Jul 21 00:44:46.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:44:47.024: INFO: namespace: e2e-tests-configmap-tffnz, resource: bindings, ignored listing per whitelist
Jul 21 00:44:47.035: INFO: namespace e2e-tests-configmap-tffnz deletion completed in 6.084747205s

• [SLOW TEST:12.320 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:44:47.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009
Jul 21 00:44:47.151: INFO: Pod name my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009: Found 0 pods out of 1
Jul 21 00:44:52.156: INFO: Pod name my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009: Found 1 pods out of 1
Jul 21 00:44:52.156: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009" are running
Jul 21 00:44:52.159: INFO: Pod "my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009-cw2sk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 00:44:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 00:44:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 00:44:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 00:44:47 +0000 UTC Reason: Message:}])
Jul 21 00:44:52.159: INFO: Trying to dial the pod
Jul 21 00:44:57.172: INFO: Controller my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009: Got expected result from replica 1 [my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009-cw2sk]: "my-hostname-basic-60608378-caeb-11ea-86e4-0242ac110009-cw2sk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:44:57.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-r8pgc" for this suite.
Jul 21 00:45:03.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:45:03.275: INFO: namespace: e2e-tests-replication-controller-r8pgc, resource: bindings, ignored listing per whitelist
Jul 21 00:45:03.275: INFO: namespace e2e-tests-replication-controller-r8pgc deletion completed in 6.099139422s

• [SLOW TEST:16.240 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:45:03.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 21 00:45:13.690: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:13.717: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:15.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:15.720: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:17.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:17.721: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:19.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:19.721: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:21.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:21.722: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:23.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:23.720: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:25.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:26.298: INFO: Pod pod-with-prestop-http-hook still exists
Jul 21 00:45:27.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 21 00:45:27.721: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:45:27.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-249sr" for this suite.
Jul 21 00:45:49.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:45:49.954: INFO: namespace: e2e-tests-container-lifecycle-hook-249sr, resource: bindings, ignored listing per whitelist
Jul 21 00:45:49.972: INFO: namespace e2e-tests-container-lifecycle-hook-249sr deletion completed in 22.240162253s

• [SLOW TEST:46.697 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:45:49.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-85f71b96-caeb-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 00:45:50.351: INFO: Waiting up to 5m0s for pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-8g6xh" to be "success or failure"
Jul 21 00:45:50.411: INFO: Pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 59.369245ms
Jul 21 00:45:52.482: INFO: Pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130935307s
Jul 21 00:45:54.486: INFO: Pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134768176s
Jul 21 00:45:56.562: INFO: Pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210294836s
STEP: Saw pod success
Jul 21 00:45:56.562: INFO: Pod "pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:45:56.565: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009 container configmap-volume-test:
STEP: delete the pod
Jul 21 00:45:56.601: INFO: Waiting for pod pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:45:56.608: INFO: Pod pod-configmaps-860ebbdc-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:45:56.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8g6xh" for this suite.
Jul 21 00:46:02.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:46:02.648: INFO: namespace: e2e-tests-configmap-8g6xh, resource: bindings, ignored listing per whitelist
Jul 21 00:46:02.693: INFO: namespace e2e-tests-configmap-8g6xh deletion completed in 6.081152643s

• [SLOW TEST:12.721 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:46:02.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-x2gfq/secret-test-8da3d8bf-caeb-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 00:46:03.101: INFO: Waiting up to 5m0s for pod "pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-x2gfq" to be "success or failure"
Jul 21 00:46:03.129: INFO: Pod "pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.318203ms
Jul 21 00:46:05.238: INFO: Pod "pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137293374s
Jul 21 00:46:07.243: INFO: Pod "pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141789727s
STEP: Saw pod success
Jul 21 00:46:07.243: INFO: Pod "pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:46:07.246: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009 container env-test:
STEP: delete the pod
Jul 21 00:46:07.533: INFO: Waiting for pod pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:46:07.548: INFO: Pod pod-configmaps-8da60220-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:46:07.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x2gfq" for this suite.
Jul 21 00:46:15.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:46:15.863: INFO: namespace: e2e-tests-secrets-x2gfq, resource: bindings, ignored listing per whitelist
Jul 21 00:46:15.889: INFO: namespace e2e-tests-secrets-x2gfq deletion completed in 8.337253636s

• [SLOW TEST:13.195 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:46:15.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 00:46:16.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-4ltgq" to be "success or failure"
Jul 21 00:46:16.635: INFO: Pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 274.969754ms
Jul 21 00:46:18.638: INFO: Pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278418307s
Jul 21 00:46:20.643: INFO: Pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282541678s
Jul 21 00:46:22.646: INFO: Pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286494348s
STEP: Saw pod success
Jul 21 00:46:22.647: INFO: Pod "downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:46:22.650: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009 container client-container:
STEP: delete the pod
Jul 21 00:46:22.935: INFO: Waiting for pod downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:46:23.286: INFO: Pod downwardapi-volume-958b9769-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:46:23.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4ltgq" for this suite.
Jul 21 00:46:29.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:46:29.350: INFO: namespace: e2e-tests-downward-api-4ltgq, resource: bindings, ignored listing per whitelist
Jul 21 00:46:29.404: INFO: namespace e2e-tests-downward-api-4ltgq deletion completed in 6.113400387s

• [SLOW TEST:13.515 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:46:29.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jul 21 00:46:29.785: INFO: Waiting up to 5m0s for pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-containers-bvx94" to be "success or failure"
Jul 21 00:46:29.826: INFO: Pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 40.634892ms
Jul 21 00:46:31.829: INFO: Pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043733634s
Jul 21 00:46:33.833: INFO: Pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047852054s
Jul 21 00:46:35.837: INFO: Pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052245881s
STEP: Saw pod success
Jul 21 00:46:35.837: INFO: Pod "client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:46:35.841: INFO: Trying to get logs from node hunter-worker2 pod client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009 container test-container:
STEP: delete the pod
Jul 21 00:46:35.886: INFO: Waiting for pod client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:46:35.907: INFO: Pod client-containers-9d8a07ac-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:46:35.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-bvx94" for this suite.
Jul 21 00:46:41.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:46:41.983: INFO: namespace: e2e-tests-containers-bvx94, resource: bindings, ignored listing per whitelist Jul 21 00:46:42.003: INFO: namespace e2e-tests-containers-bvx94 deletion completed in 6.092245571s • [SLOW TEST:12.598 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:46:42.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-dhtr STEP: Creating a pod to test atomic-volume-subpath Jul 21 00:46:42.111: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dhtr" in namespace "e2e-tests-subpath-bdpq4" to be "success or failure" Jul 21 00:46:42.161: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.209271ms Jul 21 00:46:44.251: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140377768s Jul 21 00:46:46.341: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230103942s Jul 21 00:46:48.345: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233896924s Jul 21 00:46:50.348: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 8.237691166s Jul 21 00:46:52.351: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 10.240510449s Jul 21 00:46:54.355: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 12.244719648s Jul 21 00:46:56.359: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 14.248515904s Jul 21 00:46:58.363: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 16.252013559s Jul 21 00:47:00.367: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 18.256181332s Jul 21 00:47:02.431: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 20.320373153s Jul 21 00:47:04.434: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 22.32372487s Jul 21 00:47:06.438: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Running", Reason="", readiness=false. Elapsed: 24.327721291s Jul 21 00:47:08.754: INFO: Pod "pod-subpath-test-projected-dhtr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.64359898s STEP: Saw pod success Jul 21 00:47:08.754: INFO: Pod "pod-subpath-test-projected-dhtr" satisfied condition "success or failure" Jul 21 00:47:08.757: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-dhtr container test-container-subpath-projected-dhtr: STEP: delete the pod Jul 21 00:47:08.911: INFO: Waiting for pod pod-subpath-test-projected-dhtr to disappear Jul 21 00:47:09.180: INFO: Pod pod-subpath-test-projected-dhtr no longer exists STEP: Deleting pod pod-subpath-test-projected-dhtr Jul 21 00:47:09.180: INFO: Deleting pod "pod-subpath-test-projected-dhtr" in namespace "e2e-tests-subpath-bdpq4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:47:09.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-bdpq4" for this suite. Jul 21 00:47:15.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:47:15.257: INFO: namespace: e2e-tests-subpath-bdpq4, resource: bindings, ignored listing per whitelist Jul 21 00:47:15.345: INFO: namespace e2e-tests-subpath-bdpq4 deletion completed in 6.158727248s • [SLOW TEST:33.342 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:47:15.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 21 00:47:15.433: INFO: Waiting up to 5m0s for pod "downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-tw7ls" to be "success or failure"
Jul 21 00:47:15.463: INFO: Pod "downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 30.333695ms
Jul 21 00:47:17.683: INFO: Pod "downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250107539s
Jul 21 00:47:19.687: INFO: Pod "downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253922321s
STEP: Saw pod success
Jul 21 00:47:19.687: INFO: Pod "downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:47:19.690: INFO: Trying to get logs from node hunter-worker pod downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009 container dapi-container:
STEP: delete the pod
Jul 21 00:47:19.761: INFO: Waiting for pod downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009 to disappear
Jul 21 00:47:19.976: INFO: Pod downward-api-b8c50ccf-caeb-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:47:19.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tw7ls" for this suite.
Jul 21 00:47:28.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:47:28.071: INFO: namespace: e2e-tests-downward-api-tw7ls, resource: bindings, ignored listing per whitelist
Jul 21 00:47:28.142: INFO: namespace e2e-tests-downward-api-tw7ls deletion completed in 8.161939665s
• [SLOW TEST:12.796 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:47:28.142: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-ljzzb I0721 00:47:28.419443 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-ljzzb, replica count: 1 I0721 00:47:29.469883 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0721 00:47:30.470124 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0721 00:47:31.470454 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0721 00:47:32.470606 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 21 00:47:32.736: INFO: Created: latency-svc-7cjz7 Jul 21 00:47:32.755: INFO: Got endpoints: latency-svc-7cjz7 [184.193657ms] Jul 21 00:47:32.798: INFO: Created: latency-svc-ls2vf Jul 21 00:47:32.827: INFO: Got endpoints: latency-svc-ls2vf [72.012343ms] Jul 21 00:47:32.947: INFO: Created: latency-svc-fj9sc Jul 21 00:47:33.002: INFO: Got endpoints: latency-svc-fj9sc [247.231969ms] Jul 21 00:47:33.206: INFO: Created: latency-svc-c8s6c Jul 21 00:47:33.383: INFO: Got endpoints: latency-svc-c8s6c [628.795894ms] Jul 21 00:47:33.891: INFO: Created: latency-svc-tc824 Jul 21 00:47:34.127: INFO: Got endpoints: latency-svc-tc824 [1.372339029s] Jul 21 00:47:34.579: INFO: Created: latency-svc-w5v27 Jul 21 00:47:34.620: INFO: Got endpoints: latency-svc-w5v27 
[1.864865757s] Jul 21 00:47:35.102: INFO: Created: latency-svc-gpvj5 Jul 21 00:47:35.152: INFO: Got endpoints: latency-svc-gpvj5 [2.396893899s] Jul 21 00:47:35.239: INFO: Created: latency-svc-5bbj6 Jul 21 00:47:35.339: INFO: Got endpoints: latency-svc-5bbj6 [2.584144574s] Jul 21 00:47:35.474: INFO: Created: latency-svc-6nvbp Jul 21 00:47:35.534: INFO: Got endpoints: latency-svc-6nvbp [2.779198412s] Jul 21 00:47:35.641: INFO: Created: latency-svc-tdqck Jul 21 00:47:35.648: INFO: Got endpoints: latency-svc-tdqck [2.893379677s] Jul 21 00:47:35.703: INFO: Created: latency-svc-ttvcl Jul 21 00:47:35.717: INFO: Got endpoints: latency-svc-ttvcl [2.96214201s] Jul 21 00:47:35.797: INFO: Created: latency-svc-h72hq Jul 21 00:47:35.813: INFO: Got endpoints: latency-svc-h72hq [3.057918268s] Jul 21 00:47:35.847: INFO: Created: latency-svc-b67qp Jul 21 00:47:35.879: INFO: Got endpoints: latency-svc-b67qp [3.124531314s] Jul 21 00:47:36.306: INFO: Created: latency-svc-pdwj8 Jul 21 00:47:36.309: INFO: Got endpoints: latency-svc-pdwj8 [3.554258754s] Jul 21 00:47:36.406: INFO: Created: latency-svc-lv52m Jul 21 00:47:36.467: INFO: Got endpoints: latency-svc-lv52m [3.711869776s] Jul 21 00:47:36.477: INFO: Created: latency-svc-r67tf Jul 21 00:47:36.513: INFO: Got endpoints: latency-svc-r67tf [3.758036525s] Jul 21 00:47:36.635: INFO: Created: latency-svc-b7k4n Jul 21 00:47:36.637: INFO: Got endpoints: latency-svc-b7k4n [3.810553498s] Jul 21 00:47:36.692: INFO: Created: latency-svc-xfvh6 Jul 21 00:47:36.719: INFO: Got endpoints: latency-svc-xfvh6 [3.716962084s] Jul 21 00:47:36.815: INFO: Created: latency-svc-cj8zx Jul 21 00:47:36.821: INFO: Got endpoints: latency-svc-cj8zx [3.437266079s] Jul 21 00:47:36.844: INFO: Created: latency-svc-q8hll Jul 21 00:47:36.857: INFO: Got endpoints: latency-svc-q8hll [2.730073548s] Jul 21 00:47:36.892: INFO: Created: latency-svc-gbnjz Jul 21 00:47:36.906: INFO: Got endpoints: latency-svc-gbnjz [2.286190727s] Jul 21 00:47:36.971: INFO: Created: 
latency-svc-mxlkn Jul 21 00:47:36.984: INFO: Got endpoints: latency-svc-mxlkn [1.831925181s] Jul 21 00:47:37.022: INFO: Created: latency-svc-gfgpq Jul 21 00:47:37.045: INFO: Got endpoints: latency-svc-gfgpq [1.705501887s] Jul 21 00:47:37.120: INFO: Created: latency-svc-jptm4 Jul 21 00:47:37.123: INFO: Got endpoints: latency-svc-jptm4 [1.589287202s] Jul 21 00:47:37.185: INFO: Created: latency-svc-p48vc Jul 21 00:47:37.207: INFO: Got endpoints: latency-svc-p48vc [1.559191312s] Jul 21 00:47:37.290: INFO: Created: latency-svc-mqqml Jul 21 00:47:37.308: INFO: Got endpoints: latency-svc-mqqml [1.591100289s] Jul 21 00:47:37.353: INFO: Created: latency-svc-p66fm Jul 21 00:47:37.602: INFO: Got endpoints: latency-svc-p66fm [1.789599583s] Jul 21 00:47:37.857: INFO: Created: latency-svc-7gtgk Jul 21 00:47:37.872: INFO: Got endpoints: latency-svc-7gtgk [1.992186537s] Jul 21 00:47:37.894: INFO: Created: latency-svc-xlbxw Jul 21 00:47:37.914: INFO: Got endpoints: latency-svc-xlbxw [1.604832733s] Jul 21 00:47:37.949: INFO: Created: latency-svc-6ldrh Jul 21 00:47:38.018: INFO: Got endpoints: latency-svc-6ldrh [1.5511544s] Jul 21 00:47:38.029: INFO: Created: latency-svc-92zqq Jul 21 00:47:38.035: INFO: Got endpoints: latency-svc-92zqq [1.521590542s] Jul 21 00:47:38.067: INFO: Created: latency-svc-msg4b Jul 21 00:47:38.070: INFO: Got endpoints: latency-svc-msg4b [1.43308851s] Jul 21 00:47:38.103: INFO: Created: latency-svc-nbqkd Jul 21 00:47:38.114: INFO: Got endpoints: latency-svc-nbqkd [1.394836752s] Jul 21 00:47:38.163: INFO: Created: latency-svc-8z2l2 Jul 21 00:47:38.173: INFO: Got endpoints: latency-svc-8z2l2 [1.352140829s] Jul 21 00:47:38.199: INFO: Created: latency-svc-n52fq Jul 21 00:47:38.209: INFO: Got endpoints: latency-svc-n52fq [1.352026994s] Jul 21 00:47:38.241: INFO: Created: latency-svc-9lcxd Jul 21 00:47:38.317: INFO: Got endpoints: latency-svc-9lcxd [1.411434567s] Jul 21 00:47:38.321: INFO: Created: latency-svc-6nwwl Jul 21 00:47:38.336: INFO: Got endpoints: 
latency-svc-6nwwl [1.352134521s] Jul 21 00:47:38.373: INFO: Created: latency-svc-sw6f2 Jul 21 00:47:38.408: INFO: Got endpoints: latency-svc-sw6f2 [1.363602621s] Jul 21 00:47:38.463: INFO: Created: latency-svc-cx4b7 Jul 21 00:47:38.465: INFO: Got endpoints: latency-svc-cx4b7 [1.34134405s] Jul 21 00:47:38.511: INFO: Created: latency-svc-wk57q Jul 21 00:47:38.528: INFO: Got endpoints: latency-svc-wk57q [1.321028707s] Jul 21 00:47:38.559: INFO: Created: latency-svc-84v55 Jul 21 00:47:38.610: INFO: Got endpoints: latency-svc-84v55 [1.302141111s] Jul 21 00:47:38.625: INFO: Created: latency-svc-rzbg8 Jul 21 00:47:38.643: INFO: Got endpoints: latency-svc-rzbg8 [1.040188943s] Jul 21 00:47:38.673: INFO: Created: latency-svc-pkp2l Jul 21 00:47:38.784: INFO: Got endpoints: latency-svc-pkp2l [912.739043ms] Jul 21 00:47:38.787: INFO: Created: latency-svc-vbtwq Jul 21 00:47:38.805: INFO: Got endpoints: latency-svc-vbtwq [891.220095ms] Jul 21 00:47:38.835: INFO: Created: latency-svc-glrsc Jul 21 00:47:38.870: INFO: Got endpoints: latency-svc-glrsc [852.019194ms] Jul 21 00:47:38.958: INFO: Created: latency-svc-hq7bh Jul 21 00:47:38.990: INFO: Got endpoints: latency-svc-hq7bh [955.785453ms] Jul 21 00:47:39.026: INFO: Created: latency-svc-xx8fw Jul 21 00:47:39.045: INFO: Got endpoints: latency-svc-xx8fw [974.870676ms] Jul 21 00:47:39.147: INFO: Created: latency-svc-8t9l8 Jul 21 00:47:39.159: INFO: Got endpoints: latency-svc-8t9l8 [1.045602606s] Jul 21 00:47:39.258: INFO: Created: latency-svc-fcfzq Jul 21 00:47:39.262: INFO: Got endpoints: latency-svc-fcfzq [1.088462943s] Jul 21 00:47:39.333: INFO: Created: latency-svc-75mwt Jul 21 00:47:39.352: INFO: Got endpoints: latency-svc-75mwt [1.142295516s] Jul 21 00:47:39.408: INFO: Created: latency-svc-tch8z Jul 21 00:47:39.412: INFO: Got endpoints: latency-svc-tch8z [1.094414956s] Jul 21 00:47:39.458: INFO: Created: latency-svc-j8l6r Jul 21 00:47:39.472: INFO: Got endpoints: latency-svc-j8l6r [1.135710449s] Jul 21 00:47:39.494: INFO: 
Created: latency-svc-rmj8g Jul 21 00:47:39.558: INFO: Got endpoints: latency-svc-rmj8g [1.149132286s] Jul 21 00:47:39.597: INFO: Created: latency-svc-f7t7b Jul 21 00:47:39.617: INFO: Got endpoints: latency-svc-f7t7b [1.152167599s] Jul 21 00:47:39.638: INFO: Created: latency-svc-cwbfw Jul 21 00:47:39.701: INFO: Got endpoints: latency-svc-cwbfw [1.172061666s] Jul 21 00:47:39.711: INFO: Created: latency-svc-zkr8w Jul 21 00:47:39.746: INFO: Got endpoints: latency-svc-zkr8w [1.135473031s] Jul 21 00:47:39.844: INFO: Created: latency-svc-km2lg Jul 21 00:47:39.848: INFO: Got endpoints: latency-svc-km2lg [1.204977137s] Jul 21 00:47:39.933: INFO: Created: latency-svc-4wgxd Jul 21 00:47:40.004: INFO: Got endpoints: latency-svc-4wgxd [1.219142484s] Jul 21 00:47:40.061: INFO: Created: latency-svc-ksjt5 Jul 21 00:47:40.073: INFO: Got endpoints: latency-svc-ksjt5 [1.267942472s] Jul 21 00:47:40.119: INFO: Created: latency-svc-78gs8 Jul 21 00:47:40.123: INFO: Got endpoints: latency-svc-78gs8 [1.252609316s] Jul 21 00:47:40.185: INFO: Created: latency-svc-6skk5 Jul 21 00:47:40.199: INFO: Got endpoints: latency-svc-6skk5 [1.208574025s] Jul 21 00:47:40.275: INFO: Created: latency-svc-dhj6g Jul 21 00:47:40.299: INFO: Created: latency-svc-cr496 Jul 21 00:47:40.300: INFO: Got endpoints: latency-svc-dhj6g [1.25447755s] Jul 21 00:47:40.314: INFO: Got endpoints: latency-svc-cr496 [1.154096017s] Jul 21 00:47:40.424: INFO: Created: latency-svc-87zx4 Jul 21 00:47:40.458: INFO: Got endpoints: latency-svc-87zx4 [1.196223593s] Jul 21 00:47:40.654: INFO: Created: latency-svc-7jv26 Jul 21 00:47:40.659: INFO: Got endpoints: latency-svc-7jv26 [1.306982599s] Jul 21 00:47:40.821: INFO: Created: latency-svc-b9frt Jul 21 00:47:40.825: INFO: Got endpoints: latency-svc-b9frt [1.4135666s] Jul 21 00:47:40.916: INFO: Created: latency-svc-6qb6c Jul 21 00:47:41.006: INFO: Got endpoints: latency-svc-6qb6c [1.534368395s] Jul 21 00:47:41.228: INFO: Created: latency-svc-gbzb8 Jul 21 00:47:41.244: INFO: Got 
endpoints: latency-svc-gbzb8 [1.68674112s] Jul 21 00:47:41.443: INFO: Created: latency-svc-nt4k2 Jul 21 00:47:41.449: INFO: Got endpoints: latency-svc-nt4k2 [1.832045092s] Jul 21 00:47:41.524: INFO: Created: latency-svc-prbnv Jul 21 00:47:41.598: INFO: Got endpoints: latency-svc-prbnv [1.897524194s] Jul 21 00:47:41.601: INFO: Created: latency-svc-9hg5r Jul 21 00:47:41.622: INFO: Got endpoints: latency-svc-9hg5r [1.876321338s] Jul 21 00:47:41.668: INFO: Created: latency-svc-7zd6j Jul 21 00:47:41.682: INFO: Got endpoints: latency-svc-7zd6j [1.83470644s] Jul 21 00:47:41.754: INFO: Created: latency-svc-zwf2p Jul 21 00:47:41.760: INFO: Got endpoints: latency-svc-zwf2p [1.756540806s] Jul 21 00:47:41.805: INFO: Created: latency-svc-7wxx7 Jul 21 00:47:41.827: INFO: Got endpoints: latency-svc-7wxx7 [1.753369369s] Jul 21 00:47:41.910: INFO: Created: latency-svc-tcztz Jul 21 00:47:41.935: INFO: Got endpoints: latency-svc-tcztz [1.811788381s] Jul 21 00:47:41.992: INFO: Created: latency-svc-j45b5 Jul 21 00:47:42.001: INFO: Got endpoints: latency-svc-j45b5 [1.801769093s] Jul 21 00:47:42.055: INFO: Created: latency-svc-8ddh6 Jul 21 00:47:42.061: INFO: Got endpoints: latency-svc-8ddh6 [1.760698221s] Jul 21 00:47:42.105: INFO: Created: latency-svc-jk96w Jul 21 00:47:42.121: INFO: Got endpoints: latency-svc-jk96w [1.807326431s] Jul 21 00:47:42.143: INFO: Created: latency-svc-qwwg5 Jul 21 00:47:42.191: INFO: Got endpoints: latency-svc-qwwg5 [1.73331588s] Jul 21 00:47:42.207: INFO: Created: latency-svc-2mncl Jul 21 00:47:42.224: INFO: Got endpoints: latency-svc-2mncl [1.564993989s] Jul 21 00:47:42.251: INFO: Created: latency-svc-nsntr Jul 21 00:47:42.273: INFO: Got endpoints: latency-svc-nsntr [1.447053017s] Jul 21 00:47:42.342: INFO: Created: latency-svc-9kljc Jul 21 00:47:42.381: INFO: Got endpoints: latency-svc-9kljc [1.374719612s] Jul 21 00:47:42.382: INFO: Created: latency-svc-smjvb Jul 21 00:47:42.441: INFO: Got endpoints: latency-svc-smjvb [1.196205163s] Jul 21 00:47:42.569: 
INFO: Created: latency-svc-j4ghl Jul 21 00:47:42.573: INFO: Got endpoints: latency-svc-j4ghl [1.12420851s] Jul 21 00:47:42.604: INFO: Created: latency-svc-4244k Jul 21 00:47:42.621: INFO: Got endpoints: latency-svc-4244k [1.022493886s] Jul 21 00:47:42.652: INFO: Created: latency-svc-md554 Jul 21 00:47:42.725: INFO: Got endpoints: latency-svc-md554 [1.102382703s] Jul 21 00:47:42.727: INFO: Created: latency-svc-kbrw7 Jul 21 00:47:42.735: INFO: Got endpoints: latency-svc-kbrw7 [1.052570153s] Jul 21 00:47:42.766: INFO: Created: latency-svc-tqpql Jul 21 00:47:42.954: INFO: Created: latency-svc-97rzr Jul 21 00:47:42.958: INFO: Got endpoints: latency-svc-tqpql [1.19718509s] Jul 21 00:47:43.144: INFO: Got endpoints: latency-svc-97rzr [1.317353692s] Jul 21 00:47:43.146: INFO: Created: latency-svc-bgfkc Jul 21 00:47:43.148: INFO: Got endpoints: latency-svc-bgfkc [1.213352338s] Jul 21 00:47:43.774: INFO: Created: latency-svc-jnr6q Jul 21 00:47:43.970: INFO: Got endpoints: latency-svc-jnr6q [1.969184259s] Jul 21 00:47:44.180: INFO: Created: latency-svc-g5f2s Jul 21 00:47:44.198: INFO: Got endpoints: latency-svc-g5f2s [2.137412566s] Jul 21 00:47:44.378: INFO: Created: latency-svc-92zbk Jul 21 00:47:44.381: INFO: Got endpoints: latency-svc-92zbk [2.260149584s] Jul 21 00:47:44.435: INFO: Created: latency-svc-mdsrl Jul 21 00:47:44.450: INFO: Got endpoints: latency-svc-mdsrl [2.258502585s] Jul 21 00:47:44.564: INFO: Created: latency-svc-glzdg Jul 21 00:47:44.566: INFO: Got endpoints: latency-svc-glzdg [2.342562789s] Jul 21 00:47:44.611: INFO: Created: latency-svc-p25sv Jul 21 00:47:44.624: INFO: Got endpoints: latency-svc-p25sv [2.351497995s] Jul 21 00:47:44.644: INFO: Created: latency-svc-fx5zr Jul 21 00:47:44.661: INFO: Got endpoints: latency-svc-fx5zr [2.279680231s] Jul 21 00:47:44.737: INFO: Created: latency-svc-zvj59 Jul 21 00:47:44.739: INFO: Got endpoints: latency-svc-zvj59 [2.298497887s] Jul 21 00:47:44.764: INFO: Created: latency-svc-2lgtc Jul 21 00:47:44.781: INFO: Got 
endpoints: latency-svc-2lgtc [2.207592761s] Jul 21 00:47:45.067: INFO: Created: latency-svc-ftfnp Jul 21 00:47:45.123: INFO: Got endpoints: latency-svc-ftfnp [2.501878192s] Jul 21 00:47:45.155: INFO: Created: latency-svc-r228w Jul 21 00:47:45.293: INFO: Got endpoints: latency-svc-r228w [2.568487607s] Jul 21 00:47:45.325: INFO: Created: latency-svc-p5z5v Jul 21 00:47:45.339: INFO: Got endpoints: latency-svc-p5z5v [2.603874041s] Jul 21 00:47:45.360: INFO: Created: latency-svc-n2tfm Jul 21 00:47:45.375: INFO: Got endpoints: latency-svc-n2tfm [2.41742695s] Jul 21 00:47:45.443: INFO: Created: latency-svc-w2pxl Jul 21 00:47:45.446: INFO: Got endpoints: latency-svc-w2pxl [2.301866972s] Jul 21 00:47:45.485: INFO: Created: latency-svc-xcjxl Jul 21 00:47:45.514: INFO: Got endpoints: latency-svc-xcjxl [2.365898801s] Jul 21 00:47:45.623: INFO: Created: latency-svc-hv894 Jul 21 00:47:45.625: INFO: Got endpoints: latency-svc-hv894 [1.655056594s] Jul 21 00:47:45.959: INFO: Created: latency-svc-ll5rg Jul 21 00:47:46.029: INFO: Got endpoints: latency-svc-ll5rg [1.83111685s] Jul 21 00:47:46.031: INFO: Created: latency-svc-zzz7l Jul 21 00:47:46.065: INFO: Got endpoints: latency-svc-zzz7l [1.684036704s] Jul 21 00:47:46.288: INFO: Created: latency-svc-2cbll Jul 21 00:47:46.324: INFO: Got endpoints: latency-svc-2cbll [1.873855639s] Jul 21 00:47:46.588: INFO: Created: latency-svc-r57p6 Jul 21 00:47:46.638: INFO: Got endpoints: latency-svc-r57p6 [2.071141834s] Jul 21 00:47:46.725: INFO: Created: latency-svc-j59tq Jul 21 00:47:46.737: INFO: Got endpoints: latency-svc-j59tq [2.112654985s] Jul 21 00:47:46.769: INFO: Created: latency-svc-bcxmm Jul 21 00:47:46.815: INFO: Got endpoints: latency-svc-bcxmm [2.154171383s] Jul 21 00:47:46.926: INFO: Created: latency-svc-nb8t7 Jul 21 00:47:47.066: INFO: Got endpoints: latency-svc-nb8t7 [2.326839712s] Jul 21 00:47:47.069: INFO: Created: latency-svc-5p475 Jul 21 00:47:47.082: INFO: Got endpoints: latency-svc-5p475 [2.301112113s] Jul 21 00:47:47.129: 
INFO: Created: latency-svc-pmjsc Jul 21 00:47:47.152: INFO: Got endpoints: latency-svc-pmjsc [2.028755773s] Jul 21 00:47:47.218: INFO: Created: latency-svc-h9zvh Jul 21 00:47:47.247: INFO: Got endpoints: latency-svc-h9zvh [1.953967745s] Jul 21 00:47:47.285: INFO: Created: latency-svc-klj9z Jul 21 00:47:47.365: INFO: Got endpoints: latency-svc-klj9z [2.025977212s] Jul 21 00:47:47.404: INFO: Created: latency-svc-g8nkr Jul 21 00:47:47.428: INFO: Got endpoints: latency-svc-g8nkr [2.052629323s] Jul 21 00:47:47.458: INFO: Created: latency-svc-pwjvc Jul 21 00:47:47.515: INFO: Got endpoints: latency-svc-pwjvc [2.068915457s] Jul 21 00:47:47.549: INFO: Created: latency-svc-whvxk Jul 21 00:47:47.572: INFO: Got endpoints: latency-svc-whvxk [2.058149729s] Jul 21 00:47:47.602: INFO: Created: latency-svc-wshps Jul 21 00:47:47.658: INFO: Got endpoints: latency-svc-wshps [2.03293594s] Jul 21 00:47:47.662: INFO: Created: latency-svc-6tvj4 Jul 21 00:47:47.681: INFO: Got endpoints: latency-svc-6tvj4 [1.651183074s] Jul 21 00:47:47.823: INFO: Created: latency-svc-krq7m Jul 21 00:47:47.868: INFO: Got endpoints: latency-svc-krq7m [1.802358292s] Jul 21 00:47:47.871: INFO: Created: latency-svc-rxdjz Jul 21 00:47:47.897: INFO: Got endpoints: latency-svc-rxdjz [1.573509359s] Jul 21 00:47:47.969: INFO: Created: latency-svc-b6z85 Jul 21 00:47:48.005: INFO: Got endpoints: latency-svc-b6z85 [1.367516067s] Jul 21 00:47:48.047: INFO: Created: latency-svc-hnztf Jul 21 00:47:48.113: INFO: Got endpoints: latency-svc-hnztf [1.376471508s] Jul 21 00:47:48.162: INFO: Created: latency-svc-m5m9q Jul 21 00:47:48.179: INFO: Got endpoints: latency-svc-m5m9q [1.364120062s] Jul 21 00:47:48.270: INFO: Created: latency-svc-jccq6 Jul 21 00:47:48.281: INFO: Got endpoints: latency-svc-jccq6 [1.21496008s] Jul 21 00:47:48.323: INFO: Created: latency-svc-jgtbj Jul 21 00:47:48.354: INFO: Got endpoints: latency-svc-jgtbj [1.27131285s] Jul 21 00:47:48.444: INFO: Created: latency-svc-pglx9 Jul 21 00:47:48.456: INFO: Got 
endpoints: latency-svc-pglx9 [1.304027009s] Jul 21 00:47:48.522: INFO: Created: latency-svc-clxcm Jul 21 00:47:48.587: INFO: Got endpoints: latency-svc-clxcm [1.339251466s] Jul 21 00:47:48.600: INFO: Created: latency-svc-pbbxr Jul 21 00:47:48.618: INFO: Got endpoints: latency-svc-pbbxr [1.252980482s] Jul 21 00:47:48.648: INFO: Created: latency-svc-6wcdz Jul 21 00:47:48.672: INFO: Got endpoints: latency-svc-6wcdz [1.244165619s] Jul 21 00:47:48.817: INFO: Created: latency-svc-cz98k Jul 21 00:47:48.818: INFO: Got endpoints: latency-svc-cz98k [1.30317393s] Jul 21 00:47:48.908: INFO: Created: latency-svc-cf6dl Jul 21 00:47:49.019: INFO: Got endpoints: latency-svc-cf6dl [1.446146672s] Jul 21 00:47:49.027: INFO: Created: latency-svc-9ptpp Jul 21 00:47:49.068: INFO: Got endpoints: latency-svc-9ptpp [1.409528397s] Jul 21 00:47:49.234: INFO: Created: latency-svc-667nz Jul 21 00:47:49.267: INFO: Got endpoints: latency-svc-667nz [1.585923079s] Jul 21 00:47:49.438: INFO: Created: latency-svc-l9hpl Jul 21 00:47:49.441: INFO: Got endpoints: latency-svc-l9hpl [1.572749097s] Jul 21 00:47:49.665: INFO: Created: latency-svc-42nhm Jul 21 00:47:49.891: INFO: Got endpoints: latency-svc-42nhm [1.99357674s] Jul 21 00:47:50.144: INFO: Created: latency-svc-mpr6p Jul 21 00:47:50.330: INFO: Got endpoints: latency-svc-mpr6p [2.324664275s] Jul 21 00:47:50.383: INFO: Created: latency-svc-4645h Jul 21 00:47:50.647: INFO: Got endpoints: latency-svc-4645h [2.533558386s] Jul 21 00:47:50.651: INFO: Created: latency-svc-2nvxn Jul 21 00:47:50.856: INFO: Got endpoints: latency-svc-2nvxn [2.677209072s] Jul 21 00:47:50.887: INFO: Created: latency-svc-qk26c Jul 21 00:47:50.927: INFO: Got endpoints: latency-svc-qk26c [2.646089717s] Jul 21 00:47:51.104: INFO: Created: latency-svc-82682 Jul 21 00:47:51.128: INFO: Got endpoints: latency-svc-82682 [2.774197289s] Jul 21 00:47:51.222: INFO: Created: latency-svc-bgtxw Jul 21 00:47:51.233: INFO: Got endpoints: latency-svc-bgtxw [2.777208042s] Jul 21 00:47:51.287: 
INFO: Created: latency-svc-zsbtp Jul 21 00:47:51.305: INFO: Got endpoints: latency-svc-zsbtp [2.718383468s] Jul 21 00:47:51.416: INFO: Created: latency-svc-r8rk6 Jul 21 00:47:51.438: INFO: Got endpoints: latency-svc-r8rk6 [2.819569672s] Jul 21 00:47:51.555: INFO: Created: latency-svc-bt8qm Jul 21 00:47:51.587: INFO: Got endpoints: latency-svc-bt8qm [2.915160643s] Jul 21 00:47:51.627: INFO: Created: latency-svc-mm29m Jul 21 00:47:51.682: INFO: Got endpoints: latency-svc-mm29m [2.864149535s] Jul 21 00:47:51.716: INFO: Created: latency-svc-bbc6q Jul 21 00:47:51.732: INFO: Got endpoints: latency-svc-bbc6q [2.713474815s] Jul 21 00:47:51.759: INFO: Created: latency-svc-jzrmd Jul 21 00:47:51.776: INFO: Got endpoints: latency-svc-jzrmd [2.708217185s] Jul 21 00:47:51.842: INFO: Created: latency-svc-t2452 Jul 21 00:47:51.858: INFO: Got endpoints: latency-svc-t2452 [2.591015126s] Jul 21 00:47:51.903: INFO: Created: latency-svc-6fwfc Jul 21 00:47:51.924: INFO: Got endpoints: latency-svc-6fwfc [2.483520644s] Jul 21 00:47:51.983: INFO: Created: latency-svc-qxwk5 Jul 21 00:47:51.990: INFO: Got endpoints: latency-svc-qxwk5 [2.099499345s] Jul 21 00:47:52.016: INFO: Created: latency-svc-wwwdp Jul 21 00:47:52.039: INFO: Got endpoints: latency-svc-wwwdp [1.708995105s] Jul 21 00:47:52.150: INFO: Created: latency-svc-wgv4s Jul 21 00:47:52.153: INFO: Got endpoints: latency-svc-wgv4s [1.505494581s] Jul 21 00:47:52.185: INFO: Created: latency-svc-vk55f Jul 21 00:47:52.214: INFO: Got endpoints: latency-svc-vk55f [1.357898741s] Jul 21 00:47:52.319: INFO: Created: latency-svc-d6jpf Jul 21 00:47:52.394: INFO: Got endpoints: latency-svc-d6jpf [1.466852061s] Jul 21 00:47:52.395: INFO: Created: latency-svc-qnbj9 Jul 21 00:47:52.527: INFO: Got endpoints: latency-svc-qnbj9 [1.398713692s] Jul 21 00:47:52.593: INFO: Created: latency-svc-5k8tn Jul 21 00:47:52.603: INFO: Got endpoints: latency-svc-5k8tn [1.370239824s] Jul 21 00:47:52.670: INFO: Created: latency-svc-cq4f6 Jul 21 00:47:52.693: INFO: Got 
endpoints: latency-svc-cq4f6 [1.387973826s] Jul 21 00:47:52.719: INFO: Created: latency-svc-gtqlj Jul 21 00:47:52.736: INFO: Got endpoints: latency-svc-gtqlj [1.297814681s] Jul 21 00:47:52.762: INFO: Created: latency-svc-jknsm Jul 21 00:47:52.820: INFO: Got endpoints: latency-svc-jknsm [1.232692229s] Jul 21 00:47:52.822: INFO: Created: latency-svc-z49js Jul 21 00:47:52.856: INFO: Got endpoints: latency-svc-z49js [1.173695252s] Jul 21 00:47:52.896: INFO: Created: latency-svc-dcc86 Jul 21 00:47:52.910: INFO: Got endpoints: latency-svc-dcc86 [1.177642768s] Jul 21 00:47:52.989: INFO: Created: latency-svc-kj2p6 Jul 21 00:47:52.991: INFO: Got endpoints: latency-svc-kj2p6 [1.215105419s] Jul 21 00:47:53.042: INFO: Created: latency-svc-9d2vs Jul 21 00:47:53.150: INFO: Created: latency-svc-xxn9j Jul 21 00:47:53.154: INFO: Got endpoints: latency-svc-9d2vs [1.296008751s] Jul 21 00:47:53.192: INFO: Created: latency-svc-s4gsv Jul 21 00:47:53.235: INFO: Got endpoints: latency-svc-s4gsv [1.244218736s] Jul 21 00:47:53.235: INFO: Got endpoints: latency-svc-xxn9j [1.31052604s] Jul 21 00:47:53.348: INFO: Created: latency-svc-dl92k Jul 21 00:47:53.355: INFO: Got endpoints: latency-svc-dl92k [1.315632628s] Jul 21 00:47:53.389: INFO: Created: latency-svc-zp4bm Jul 21 00:47:53.392: INFO: Got endpoints: latency-svc-zp4bm [1.239198251s] Jul 21 00:47:53.462: INFO: Created: latency-svc-bgkvs Jul 21 00:47:53.465: INFO: Got endpoints: latency-svc-bgkvs [1.250288467s] Jul 21 00:47:53.504: INFO: Created: latency-svc-clbzf Jul 21 00:47:53.525: INFO: Got endpoints: latency-svc-clbzf [1.130324514s] Jul 21 00:47:53.546: INFO: Created: latency-svc-6n5hx Jul 21 00:47:53.560: INFO: Got endpoints: latency-svc-6n5hx [1.032893917s] Jul 21 00:47:53.673: INFO: Created: latency-svc-pf29s Jul 21 00:47:53.686: INFO: Got endpoints: latency-svc-pf29s [1.082883828s] Jul 21 00:47:53.714: INFO: Created: latency-svc-dxqf5 Jul 21 00:47:53.766: INFO: Got endpoints: latency-svc-dxqf5 [1.072893069s] Jul 21 00:47:53.780: 
INFO: Created: latency-svc-lqmrc Jul 21 00:47:53.794: INFO: Got endpoints: latency-svc-lqmrc [1.058793585s] Jul 21 00:47:53.827: INFO: Created: latency-svc-4svdm Jul 21 00:47:53.849: INFO: Got endpoints: latency-svc-4svdm [1.028610484s] Jul 21 00:47:54.004: INFO: Created: latency-svc-bttdv Jul 21 00:47:54.049: INFO: Got endpoints: latency-svc-bttdv [1.193267846s] Jul 21 00:47:54.131: INFO: Created: latency-svc-65p5v Jul 21 00:47:54.161: INFO: Got endpoints: latency-svc-65p5v [1.251385213s] Jul 21 00:47:54.188: INFO: Created: latency-svc-fmdsc Jul 21 00:47:54.203: INFO: Got endpoints: latency-svc-fmdsc [1.211900353s] Jul 21 00:47:54.230: INFO: Created: latency-svc-rpx6s Jul 21 00:47:54.269: INFO: Got endpoints: latency-svc-rpx6s [1.115295208s] Jul 21 00:47:54.315: INFO: Created: latency-svc-7vx2t Jul 21 00:47:54.330: INFO: Got endpoints: latency-svc-7vx2t [1.094893867s] Jul 21 00:47:54.350: INFO: Created: latency-svc-k29v4 Jul 21 00:47:54.366: INFO: Got endpoints: latency-svc-k29v4 [1.131161084s] Jul 21 00:47:54.449: INFO: Created: latency-svc-8ll7c Jul 21 00:47:54.506: INFO: Got endpoints: latency-svc-8ll7c [1.150905281s] Jul 21 00:47:54.506: INFO: Created: latency-svc-pp59r Jul 21 00:47:54.592: INFO: Got endpoints: latency-svc-pp59r [1.200247443s] Jul 21 00:47:54.693: INFO: Created: latency-svc-tnkx7 Jul 21 00:47:54.739: INFO: Got endpoints: latency-svc-tnkx7 [1.274395323s] Jul 21 00:47:54.764: INFO: Created: latency-svc-4zhcg Jul 21 00:47:54.781: INFO: Got endpoints: latency-svc-4zhcg [1.255947405s] Jul 21 00:47:54.800: INFO: Created: latency-svc-4vkwl Jul 21 00:47:54.817: INFO: Got endpoints: latency-svc-4vkwl [1.256969673s] Jul 21 00:47:54.874: INFO: Created: latency-svc-5zjdz Jul 21 00:47:54.882: INFO: Got endpoints: latency-svc-5zjdz [1.196028086s] Jul 21 00:47:54.915: INFO: Created: latency-svc-b8mxl Jul 21 00:47:54.943: INFO: Got endpoints: latency-svc-b8mxl [1.177266636s] Jul 21 00:47:55.055: INFO: Created: latency-svc-x447g Jul 21 00:47:55.057: INFO: Got 
endpoints: latency-svc-x447g [1.262116043s] Jul 21 00:47:55.274: INFO: Created: latency-svc-gxbww Jul 21 00:47:55.292: INFO: Got endpoints: latency-svc-gxbww [1.442800006s] Jul 21 00:47:55.346: INFO: Created: latency-svc-fx9w7 Jul 21 00:47:55.399: INFO: Got endpoints: latency-svc-fx9w7 [1.349708194s] Jul 21 00:47:55.424: INFO: Created: latency-svc-wmdx2 Jul 21 00:47:55.461: INFO: Got endpoints: latency-svc-wmdx2 [1.299909557s] Jul 21 00:47:55.600: INFO: Created: latency-svc-k8mk8 Jul 21 00:47:55.653: INFO: Got endpoints: latency-svc-k8mk8 [1.449447071s] Jul 21 00:47:56.222: INFO: Created: latency-svc-97ptn Jul 21 00:47:56.275: INFO: Got endpoints: latency-svc-97ptn [2.005984313s] Jul 21 00:47:56.593: INFO: Created: latency-svc-gtqm2 Jul 21 00:47:56.601: INFO: Got endpoints: latency-svc-gtqm2 [2.270801619s] Jul 21 00:47:56.745: INFO: Created: latency-svc-jpn6x Jul 21 00:47:56.775: INFO: Got endpoints: latency-svc-jpn6x [2.408699747s] Jul 21 00:47:56.775: INFO: Created: latency-svc-kc8k8 Jul 21 00:47:56.842: INFO: Got endpoints: latency-svc-kc8k8 [2.336652595s] Jul 21 00:47:56.907: INFO: Created: latency-svc-22nbp Jul 21 00:47:56.922: INFO: Got endpoints: latency-svc-22nbp [2.330035821s] Jul 21 00:47:56.922: INFO: Latencies: [72.012343ms 247.231969ms 628.795894ms 852.019194ms 891.220095ms 912.739043ms 955.785453ms 974.870676ms 1.022493886s 1.028610484s 1.032893917s 1.040188943s 1.045602606s 1.052570153s 1.058793585s 1.072893069s 1.082883828s 1.088462943s 1.094414956s 1.094893867s 1.102382703s 1.115295208s 1.12420851s 1.130324514s 1.131161084s 1.135473031s 1.135710449s 1.142295516s 1.149132286s 1.150905281s 1.152167599s 1.154096017s 1.172061666s 1.173695252s 1.177266636s 1.177642768s 1.193267846s 1.196028086s 1.196205163s 1.196223593s 1.19718509s 1.200247443s 1.204977137s 1.208574025s 1.211900353s 1.213352338s 1.21496008s 1.215105419s 1.219142484s 1.232692229s 1.239198251s 1.244165619s 1.244218736s 1.250288467s 1.251385213s 1.252609316s 1.252980482s 1.25447755s 
1.255947405s 1.256969673s 1.262116043s 1.267942472s 1.27131285s 1.274395323s 1.296008751s 1.297814681s 1.299909557s 1.302141111s 1.30317393s 1.304027009s 1.306982599s 1.31052604s 1.315632628s 1.317353692s 1.321028707s 1.339251466s 1.34134405s 1.349708194s 1.352026994s 1.352134521s 1.352140829s 1.357898741s 1.363602621s 1.364120062s 1.367516067s 1.370239824s 1.372339029s 1.374719612s 1.376471508s 1.387973826s 1.394836752s 1.398713692s 1.409528397s 1.411434567s 1.4135666s 1.43308851s 1.442800006s 1.446146672s 1.447053017s 1.449447071s 1.466852061s 1.505494581s 1.521590542s 1.534368395s 1.5511544s 1.559191312s 1.564993989s 1.572749097s 1.573509359s 1.585923079s 1.589287202s 1.591100289s 1.604832733s 1.651183074s 1.655056594s 1.684036704s 1.68674112s 1.705501887s 1.708995105s 1.73331588s 1.753369369s 1.756540806s 1.760698221s 1.789599583s 1.801769093s 1.802358292s 1.807326431s 1.811788381s 1.83111685s 1.831925181s 1.832045092s 1.83470644s 1.864865757s 1.873855639s 1.876321338s 1.897524194s 1.953967745s 1.969184259s 1.992186537s 1.99357674s 2.005984313s 2.025977212s 2.028755773s 2.03293594s 2.052629323s 2.058149729s 2.068915457s 2.071141834s 2.099499345s 2.112654985s 2.137412566s 2.154171383s 2.207592761s 2.258502585s 2.260149584s 2.270801619s 2.279680231s 2.286190727s 2.298497887s 2.301112113s 2.301866972s 2.324664275s 2.326839712s 2.330035821s 2.336652595s 2.342562789s 2.351497995s 2.365898801s 2.396893899s 2.408699747s 2.41742695s 2.483520644s 2.501878192s 2.533558386s 2.568487607s 2.584144574s 2.591015126s 2.603874041s 2.646089717s 2.677209072s 2.708217185s 2.713474815s 2.718383468s 2.730073548s 2.774197289s 2.777208042s 2.779198412s 2.819569672s 2.864149535s 2.893379677s 2.915160643s 2.96214201s 3.057918268s 3.124531314s 3.437266079s 3.554258754s 3.711869776s 3.716962084s 3.758036525s 3.810553498s] Jul 21 00:47:56.923: INFO: 50 %ile: 1.466852061s Jul 21 00:47:56.923: INFO: 90 %ile: 2.708217185s Jul 21 00:47:56.923: INFO: 99 %ile: 3.758036525s Jul 21 00:47:56.923: 
INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:47:56.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-ljzzb" for this suite. Jul 21 00:48:34.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:48:34.996: INFO: namespace: e2e-tests-svc-latency-ljzzb, resource: bindings, ignored listing per whitelist Jul 21 00:48:35.001: INFO: namespace e2e-tests-svc-latency-ljzzb deletion completed in 38.072000042s • [SLOW TEST:66.860 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:48:35.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes 
STEP: updating the pod Jul 21 00:48:42.192: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e89449f4-caeb-11ea-86e4-0242ac110009" Jul 21 00:48:42.192: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e89449f4-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-pods-bxmcl" to be "terminated due to deadline exceeded" Jul 21 00:48:42.270: INFO: Pod "pod-update-activedeadlineseconds-e89449f4-caeb-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 77.989226ms Jul 21 00:48:44.275: INFO: Pod "pod-update-activedeadlineseconds-e89449f4-caeb-11ea-86e4-0242ac110009": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.082673508s Jul 21 00:48:44.275: INFO: Pod "pod-update-activedeadlineseconds-e89449f4-caeb-11ea-86e4-0242ac110009" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:48:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bxmcl" for this suite. 
Jul 21 00:48:50.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:48:50.333: INFO: namespace: e2e-tests-pods-bxmcl, resource: bindings, ignored listing per whitelist Jul 21 00:48:50.390: INFO: namespace e2e-tests-pods-bxmcl deletion completed in 6.111043391s • [SLOW TEST:15.388 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:48:50.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 21 00:48:50.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-xss88" to be "success or failure" Jul 21 00:48:50.677: INFO: Pod 
"downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 73.311127ms Jul 21 00:48:52.681: INFO: Pod "downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076957757s Jul 21 00:48:54.685: INFO: Pod "downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081018668s STEP: Saw pod success Jul 21 00:48:54.685: INFO: Pod "downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:48:54.688: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009 container client-container: STEP: delete the pod Jul 21 00:48:54.718: INFO: Waiting for pod downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009 to disappear Jul 21 00:48:54.755: INFO: Pod downwardapi-volume-f17c9758-caeb-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:48:54.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xss88" for this suite. 
Jul 21 00:49:00.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:49:00.841: INFO: namespace: e2e-tests-downward-api-xss88, resource: bindings, ignored listing per whitelist Jul 21 00:49:00.910: INFO: namespace e2e-tests-downward-api-xss88 deletion completed in 6.151345585s • [SLOW TEST:10.520 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:49:00.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:49:10.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-nw6ts" for this suite. 
Jul 21 00:49:32.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:49:32.162: INFO: namespace: e2e-tests-replication-controller-nw6ts, resource: bindings, ignored listing per whitelist Jul 21 00:49:32.217: INFO: namespace e2e-tests-replication-controller-nw6ts deletion completed in 22.154645034s • [SLOW TEST:31.306 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:49:32.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 21 00:49:32.298: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:49:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-init-container-hc5hv" for this suite. Jul 21 00:50:02.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:50:02.468: INFO: namespace: e2e-tests-init-container-hc5hv, resource: bindings, ignored listing per whitelist Jul 21 00:50:02.535: INFO: namespace e2e-tests-init-container-hc5hv deletion completed in 22.155207122s • [SLOW TEST:30.317 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:50:02.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 21 00:50:02.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-6vf6v" to be "success or failure" Jul 21 00:50:02.707: INFO: Pod 
"downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.8738ms Jul 21 00:50:04.775: INFO: Pod "downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093727762s Jul 21 00:50:06.805: INFO: Pod "downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123527064s STEP: Saw pod success Jul 21 00:50:06.805: INFO: Pod "downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:50:06.807: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009 container client-container: STEP: delete the pod Jul 21 00:50:06.822: INFO: Waiting for pod downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009 to disappear Jul 21 00:50:06.827: INFO: Pod downwardapi-volume-1c752260-caec-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:50:06.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6vf6v" for this suite. 
Jul 21 00:50:12.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:50:12.887: INFO: namespace: e2e-tests-downward-api-6vf6v, resource: bindings, ignored listing per whitelist Jul 21 00:50:12.960: INFO: namespace e2e-tests-downward-api-6vf6v deletion completed in 6.129925704s • [SLOW TEST:10.425 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:50:12.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 21 00:50:17.637: INFO: Successfully updated pod "labelsupdate22a6f9e7-caec-11ea-86e4-0242ac110009" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:50:19.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-projected-xzj5f" for this suite. Jul 21 00:50:41.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:50:41.718: INFO: namespace: e2e-tests-projected-xzj5f, resource: bindings, ignored listing per whitelist Jul 21 00:50:41.755: INFO: namespace e2e-tests-projected-xzj5f deletion completed in 22.090890442s • [SLOW TEST:28.795 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:50:41.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 21 00:50:41.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-7lfq7" to be "success or failure" Jul 21 00:50:41.894: INFO: Pod 
"downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.58745ms Jul 21 00:50:43.897: INFO: Pod "downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007058394s Jul 21 00:50:45.902: INFO: Pod "downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011415183s STEP: Saw pod success Jul 21 00:50:45.902: INFO: Pod "downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:50:45.904: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009 container client-container: STEP: delete the pod Jul 21 00:50:45.962: INFO: Waiting for pod downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009 to disappear Jul 21 00:50:45.972: INFO: Pod downwardapi-volume-33d1dae4-caec-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:50:45.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7lfq7" for this suite. 
Jul 21 00:50:51.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:50:52.060: INFO: namespace: e2e-tests-projected-7lfq7, resource: bindings, ignored listing per whitelist Jul 21 00:50:52.069: INFO: namespace e2e-tests-projected-7lfq7 deletion completed in 6.093430978s • [SLOW TEST:10.314 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:50:52.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jul 21 00:50:52.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6548d' Jul 21 00:50:54.712: INFO: stderr: "" Jul 21 00:50:54.712: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all 
containers in name=update-demo pods to come up. Jul 21 00:50:54.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:50:54.872: INFO: stderr: "" Jul 21 00:50:54.872: INFO: stdout: "update-demo-nautilus-4cchd update-demo-nautilus-td67x " Jul 21 00:50:54.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:50:54.960: INFO: stderr: "" Jul 21 00:50:54.960: INFO: stdout: "" Jul 21 00:50:54.960: INFO: update-demo-nautilus-4cchd is created but not running Jul 21 00:50:59.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:00.064: INFO: stderr: "" Jul 21 00:51:00.065: INFO: stdout: "update-demo-nautilus-4cchd update-demo-nautilus-td67x " Jul 21 00:51:00.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:00.161: INFO: stderr: "" Jul 21 00:51:00.162: INFO: stdout: "true" Jul 21 00:51:00.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:00.273: INFO: stderr: "" Jul 21 00:51:00.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:00.273: INFO: validating pod update-demo-nautilus-4cchd Jul 21 00:51:00.293: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:00.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 21 00:51:00.293: INFO: update-demo-nautilus-4cchd is verified up and running Jul 21 00:51:00.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-td67x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:00.406: INFO: stderr: "" Jul 21 00:51:00.406: INFO: stdout: "true" Jul 21 00:51:00.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-td67x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:00.511: INFO: stderr: "" Jul 21 00:51:00.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:00.511: INFO: validating pod update-demo-nautilus-td67x Jul 21 00:51:00.515: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:00.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 21 00:51:00.515: INFO: update-demo-nautilus-td67x is verified up and running STEP: scaling down the replication controller Jul 21 00:51:00.518: INFO: scanned /root for discovery docs: Jul 21 00:51:00.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:01.672: INFO: stderr: "" Jul 21 00:51:01.672: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 21 00:51:01.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:01.777: INFO: stderr: "" Jul 21 00:51:01.777: INFO: stdout: "update-demo-nautilus-4cchd update-demo-nautilus-td67x " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 21 00:51:06.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:06.886: INFO: stderr: "" Jul 21 00:51:06.886: INFO: stdout: "update-demo-nautilus-4cchd " Jul 21 00:51:06.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:06.985: INFO: stderr: "" Jul 21 00:51:06.985: INFO: stdout: "true" Jul 21 00:51:06.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:07.084: INFO: stderr: "" Jul 21 00:51:07.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:07.084: INFO: validating pod update-demo-nautilus-4cchd Jul 21 00:51:07.088: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:07.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 21 00:51:07.088: INFO: update-demo-nautilus-4cchd is verified up and running STEP: scaling up the replication controller Jul 21 00:51:07.090: INFO: scanned /root for discovery docs: Jul 21 00:51:07.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:08.259: INFO: stderr: "" Jul 21 00:51:08.259: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 21 00:51:08.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:08.359: INFO: stderr: "" Jul 21 00:51:08.359: INFO: stdout: "update-demo-nautilus-4cchd update-demo-nautilus-qnnwb " Jul 21 00:51:08.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:08.465: INFO: stderr: "" Jul 21 00:51:08.465: INFO: stdout: "true" Jul 21 00:51:08.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:08.586: INFO: stderr: "" Jul 21 00:51:08.586: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:08.586: INFO: validating pod update-demo-nautilus-4cchd Jul 21 00:51:08.590: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:08.590: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 21 00:51:08.590: INFO: update-demo-nautilus-4cchd is verified up and running Jul 21 00:51:08.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qnnwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:08.689: INFO: stderr: "" Jul 21 00:51:08.689: INFO: stdout: "" Jul 21 00:51:08.689: INFO: update-demo-nautilus-qnnwb is created but not running Jul 21 00:51:13.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:13.803: INFO: stderr: "" Jul 21 00:51:13.803: INFO: stdout: "update-demo-nautilus-4cchd update-demo-nautilus-qnnwb " Jul 21 00:51:13.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:13.901: INFO: stderr: "" Jul 21 00:51:13.901: INFO: stdout: "true" Jul 21 00:51:13.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cchd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:14.002: INFO: stderr: "" Jul 21 00:51:14.002: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:14.002: INFO: validating pod update-demo-nautilus-4cchd Jul 21 00:51:14.005: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:14.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 21 00:51:14.005: INFO: update-demo-nautilus-4cchd is verified up and running Jul 21 00:51:14.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qnnwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:14.103: INFO: stderr: "" Jul 21 00:51:14.103: INFO: stdout: "true" Jul 21 00:51:14.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qnnwb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:14.204: INFO: stderr: "" Jul 21 00:51:14.204: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 21 00:51:14.204: INFO: validating pod update-demo-nautilus-qnnwb Jul 21 00:51:14.208: INFO: got data: { "image": "nautilus.jpg" } Jul 21 00:51:14.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 21 00:51:14.208: INFO: update-demo-nautilus-qnnwb is verified up and running STEP: using delete to clean up resources Jul 21 00:51:14.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:14.338: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 21 00:51:14.338: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 21 00:51:14.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6548d' Jul 21 00:51:14.436: INFO: stderr: "No resources found.\n" Jul 21 00:51:14.436: INFO: stdout: "" Jul 21 00:51:14.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6548d -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 21 00:51:14.549: INFO: stderr: "" Jul 21 00:51:14.549: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:51:14.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6548d" for this 
suite. Jul 21 00:51:36.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:51:37.178: INFO: namespace: e2e-tests-kubectl-6548d, resource: bindings, ignored listing per whitelist Jul 21 00:51:37.216: INFO: namespace e2e-tests-kubectl-6548d deletion completed in 22.663189372s • [SLOW TEST:45.147 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:51:37.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-54df5168-caec-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume configMaps Jul 21 00:51:37.350: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-r27ck" to be "success or failure" Jul 21 00:51:37.356: INFO: Pod 
"pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248027ms Jul 21 00:51:39.471: INFO: Pod "pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121160197s Jul 21 00:51:41.475: INFO: Pod "pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124972465s STEP: Saw pod success Jul 21 00:51:41.475: INFO: Pod "pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:51:41.478: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Jul 21 00:51:41.495: INFO: Waiting for pod pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009 to disappear Jul 21 00:51:41.500: INFO: Pod pod-projected-configmaps-54dfcbc7-caec-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:51:41.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r27ck" for this suite. 
Jul 21 00:51:47.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:51:47.592: INFO: namespace: e2e-tests-projected-r27ck, resource: bindings, ignored listing per whitelist Jul 21 00:51:47.596: INFO: namespace e2e-tests-projected-r27ck deletion completed in 6.092400369s • [SLOW TEST:10.379 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:51:47.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5b1354e9-caec-11ea-86e4-0242ac110009 STEP: Creating a pod to test consume secrets Jul 21 00:51:47.762: INFO: Waiting up to 5m0s for pod "pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-7t42v" to be "success or failure" Jul 21 00:51:47.798: INFO: Pod "pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.228374ms Jul 21 00:51:49.803: INFO: Pod "pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040637183s Jul 21 00:51:51.807: INFO: Pod "pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045433316s STEP: Saw pod success Jul 21 00:51:51.807: INFO: Pod "pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009" satisfied condition "success or failure" Jul 21 00:51:51.810: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009 container secret-volume-test: STEP: delete the pod Jul 21 00:51:52.017: INFO: Waiting for pod pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009 to disappear Jul 21 00:51:52.231: INFO: Pod pod-secrets-5b13ec8a-caec-11ea-86e4-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:51:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7t42v" for this suite. 
Jul 21 00:51:58.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:51:58.280: INFO: namespace: e2e-tests-secrets-7t42v, resource: bindings, ignored listing per whitelist Jul 21 00:51:58.331: INFO: namespace e2e-tests-secrets-7t42v deletion completed in 6.095720049s • [SLOW TEST:10.735 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:51:58.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 21 00:51:58.490: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:51:58.492: INFO: Number of nodes with available pods: 0 Jul 21 00:51:58.492: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:51:59.497: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:51:59.499: INFO: Number of nodes with available pods: 0 Jul 21 00:51:59.499: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:52:00.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:00.603: INFO: Number of nodes with available pods: 0 Jul 21 00:52:00.603: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:52:01.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:01.637: INFO: Number of nodes with available pods: 0 Jul 21 00:52:01.637: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:52:02.532: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:02.535: INFO: Number of nodes with available pods: 0 Jul 21 00:52:02.535: INFO: Node hunter-worker is running more than one daemon pod Jul 21 00:52:03.497: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:03.500: INFO: Number of nodes with available pods: 2 Jul 21 00:52:03.500: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 21 00:52:03.513: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:03.543: INFO: Number of nodes with available pods: 1 Jul 21 00:52:03.543: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:52:04.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:04.551: INFO: Number of nodes with available pods: 1 Jul 21 00:52:04.551: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:52:05.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:05.549: INFO: Number of nodes with available pods: 1 Jul 21 00:52:05.549: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:52:06.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:06.550: INFO: Number of nodes with available pods: 1 Jul 21 00:52:06.550: INFO: Node hunter-worker2 is running more than one daemon pod Jul 21 00:52:07.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 21 00:52:07.550: INFO: Number of nodes with available pods: 2 Jul 21 00:52:07.550: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gmkhq, will wait for the garbage collector to delete the pods Jul 21 00:52:07.613: INFO: Deleting DaemonSet.extensions daemon-set took: 5.719482ms Jul 21 00:52:07.814: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.357015ms Jul 21 00:52:17.517: INFO: Number of nodes with available pods: 0 Jul 21 00:52:17.517: INFO: Number of running nodes: 0, number of available pods: 0 Jul 21 00:52:17.520: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gmkhq/daemonsets","resourceVersion":"1915309"},"items":null} Jul 21 00:52:17.524: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gmkhq/pods","resourceVersion":"1915309"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:52:17.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gmkhq" for this suite. 
Jul 21 00:52:23.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:52:23.575: INFO: namespace: e2e-tests-daemonsets-gmkhq, resource: bindings, ignored listing per whitelist Jul 21 00:52:23.643: INFO: namespace e2e-tests-daemonsets-gmkhq deletion completed in 6.104301591s • [SLOW TEST:25.311 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:52:23.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vzg9l [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating 
stateful set ss in namespace e2e-tests-statefulset-vzg9l STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vzg9l Jul 21 00:52:23.768: INFO: Found 0 stateful pods, waiting for 1 Jul 21 00:52:33.772: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 21 00:52:33.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 21 00:52:34.017: INFO: stderr: "I0721 00:52:33.913789 1137 log.go:172] (0xc000138840) (0xc00062d360) Create stream\nI0721 00:52:33.913844 1137 log.go:172] (0xc000138840) (0xc00062d360) Stream added, broadcasting: 1\nI0721 00:52:33.916598 1137 log.go:172] (0xc000138840) Reply frame received for 1\nI0721 00:52:33.916649 1137 log.go:172] (0xc000138840) (0xc00034a000) Create stream\nI0721 00:52:33.916666 1137 log.go:172] (0xc000138840) (0xc00034a000) Stream added, broadcasting: 3\nI0721 00:52:33.917750 1137 log.go:172] (0xc000138840) Reply frame received for 3\nI0721 00:52:33.917801 1137 log.go:172] (0xc000138840) (0xc00062d400) Create stream\nI0721 00:52:33.917821 1137 log.go:172] (0xc000138840) (0xc00062d400) Stream added, broadcasting: 5\nI0721 00:52:33.918892 1137 log.go:172] (0xc000138840) Reply frame received for 5\nI0721 00:52:34.010486 1137 log.go:172] (0xc000138840) Data frame received for 3\nI0721 00:52:34.010548 1137 log.go:172] (0xc00034a000) (3) Data frame handling\nI0721 00:52:34.010574 1137 log.go:172] (0xc00034a000) (3) Data frame sent\nI0721 00:52:34.010582 1137 log.go:172] (0xc000138840) Data frame received for 3\nI0721 00:52:34.010588 1137 log.go:172] (0xc00034a000) (3) Data frame handling\nI0721 00:52:34.010639 1137 log.go:172] (0xc000138840) Data frame received for 5\nI0721 00:52:34.010648 1137 log.go:172] (0xc00062d400) (5) 
Data frame handling\nI0721 00:52:34.012454 1137 log.go:172] (0xc000138840) Data frame received for 1\nI0721 00:52:34.012470 1137 log.go:172] (0xc00062d360) (1) Data frame handling\nI0721 00:52:34.012481 1137 log.go:172] (0xc00062d360) (1) Data frame sent\nI0721 00:52:34.012489 1137 log.go:172] (0xc000138840) (0xc00062d360) Stream removed, broadcasting: 1\nI0721 00:52:34.012497 1137 log.go:172] (0xc000138840) Go away received\nI0721 00:52:34.012979 1137 log.go:172] (0xc000138840) (0xc00062d360) Stream removed, broadcasting: 1\nI0721 00:52:34.013010 1137 log.go:172] (0xc000138840) (0xc00034a000) Stream removed, broadcasting: 3\nI0721 00:52:34.013034 1137 log.go:172] (0xc000138840) (0xc00062d400) Stream removed, broadcasting: 5\n" Jul 21 00:52:34.017: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 21 00:52:34.017: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 21 00:52:34.021: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 21 00:52:44.027: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 21 00:52:44.027: INFO: Waiting for statefulset status.replicas updated to 0 Jul 21 00:52:44.049: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999603s Jul 21 00:52:45.054: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986698944s Jul 21 00:52:46.059: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981628275s Jul 21 00:52:47.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976876764s Jul 21 00:52:48.068: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972963315s Jul 21 00:52:49.073: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967884032s Jul 21 00:52:50.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963148035s Jul 21 00:52:51.082: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 2.95848772s Jul 21 00:52:52.087: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.953519397s Jul 21 00:52:53.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.505921ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vzg9l Jul 21 00:52:54.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 21 00:52:54.309: INFO: stderr: "I0721 00:52:54.216280 1160 log.go:172] (0xc000138840) (0xc000736640) Create stream\nI0721 00:52:54.216333 1160 log.go:172] (0xc000138840) (0xc000736640) Stream added, broadcasting: 1\nI0721 00:52:54.218969 1160 log.go:172] (0xc000138840) Reply frame received for 1\nI0721 00:52:54.219012 1160 log.go:172] (0xc000138840) (0xc0007366e0) Create stream\nI0721 00:52:54.219024 1160 log.go:172] (0xc000138840) (0xc0007366e0) Stream added, broadcasting: 3\nI0721 00:52:54.219906 1160 log.go:172] (0xc000138840) Reply frame received for 3\nI0721 00:52:54.219939 1160 log.go:172] (0xc000138840) (0xc000736780) Create stream\nI0721 00:52:54.219955 1160 log.go:172] (0xc000138840) (0xc000736780) Stream added, broadcasting: 5\nI0721 00:52:54.220902 1160 log.go:172] (0xc000138840) Reply frame received for 5\nI0721 00:52:54.302609 1160 log.go:172] (0xc000138840) Data frame received for 5\nI0721 00:52:54.302651 1160 log.go:172] (0xc000138840) Data frame received for 3\nI0721 00:52:54.302689 1160 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0721 00:52:54.302706 1160 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0721 00:52:54.302715 1160 log.go:172] (0xc000138840) Data frame received for 3\nI0721 00:52:54.302721 1160 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0721 00:52:54.302751 1160 log.go:172] (0xc000736780) (5) Data frame 
handling\nI0721 00:52:54.304506 1160 log.go:172] (0xc000138840) Data frame received for 1\nI0721 00:52:54.304524 1160 log.go:172] (0xc000736640) (1) Data frame handling\nI0721 00:52:54.304536 1160 log.go:172] (0xc000736640) (1) Data frame sent\nI0721 00:52:54.304548 1160 log.go:172] (0xc000138840) (0xc000736640) Stream removed, broadcasting: 1\nI0721 00:52:54.304613 1160 log.go:172] (0xc000138840) Go away received\nI0721 00:52:54.304832 1160 log.go:172] (0xc000138840) (0xc000736640) Stream removed, broadcasting: 1\nI0721 00:52:54.304862 1160 log.go:172] (0xc000138840) (0xc0007366e0) Stream removed, broadcasting: 3\nI0721 00:52:54.304896 1160 log.go:172] (0xc000138840) (0xc000736780) Stream removed, broadcasting: 5\n" Jul 21 00:52:54.309: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 21 00:52:54.309: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 21 00:52:54.313: INFO: Found 1 stateful pods, waiting for 3 Jul 21 00:53:04.318: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 21 00:53:04.318: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 21 00:53:04.318: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 21 00:53:04.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 21 00:53:04.533: INFO: stderr: "I0721 00:53:04.462599 1183 log.go:172] (0xc000138790) (0xc000681220) Create stream\nI0721 00:53:04.462665 1183 log.go:172] (0xc000138790) (0xc000681220) Stream added, broadcasting: 1\nI0721 00:53:04.465086 1183 log.go:172] (0xc000138790) Reply frame received for 
1\nI0721 00:53:04.465132 1183 log.go:172] (0xc000138790) (0xc000746000) Create stream\nI0721 00:53:04.465150 1183 log.go:172] (0xc000138790) (0xc000746000) Stream added, broadcasting: 3\nI0721 00:53:04.466162 1183 log.go:172] (0xc000138790) Reply frame received for 3\nI0721 00:53:04.466244 1183 log.go:172] (0xc000138790) (0xc0003fc000) Create stream\nI0721 00:53:04.466286 1183 log.go:172] (0xc000138790) (0xc0003fc000) Stream added, broadcasting: 5\nI0721 00:53:04.467217 1183 log.go:172] (0xc000138790) Reply frame received for 5\nI0721 00:53:04.524136 1183 log.go:172] (0xc000138790) Data frame received for 3\nI0721 00:53:04.524386 1183 log.go:172] (0xc000746000) (3) Data frame handling\nI0721 00:53:04.524513 1183 log.go:172] (0xc000746000) (3) Data frame sent\nI0721 00:53:04.525432 1183 log.go:172] (0xc000138790) Data frame received for 3\nI0721 00:53:04.525510 1183 log.go:172] (0xc000746000) (3) Data frame handling\nI0721 00:53:04.526090 1183 log.go:172] (0xc000138790) Data frame received for 5\nI0721 00:53:04.526167 1183 log.go:172] (0xc0003fc000) (5) Data frame handling\nI0721 00:53:04.527064 1183 log.go:172] (0xc000138790) Data frame received for 1\nI0721 00:53:04.527151 1183 log.go:172] (0xc000681220) (1) Data frame handling\nI0721 00:53:04.527247 1183 log.go:172] (0xc000681220) (1) Data frame sent\nI0721 00:53:04.528263 1183 log.go:172] (0xc000138790) (0xc000681220) Stream removed, broadcasting: 1\nI0721 00:53:04.528513 1183 log.go:172] (0xc000138790) (0xc000681220) Stream removed, broadcasting: 1\nI0721 00:53:04.528540 1183 log.go:172] (0xc000138790) (0xc000746000) Stream removed, broadcasting: 3\nI0721 00:53:04.528965 1183 log.go:172] (0xc000138790) (0xc0003fc000) Stream removed, broadcasting: 5\nI0721 00:53:04.529444 1183 log.go:172] (0xc000138790) Go away received\n" Jul 21 00:53:04.533: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 21 00:53:04.533: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on 
ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 21 00:53:04.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 21 00:53:04.769: INFO: stderr: "I0721 00:53:04.649419 1206 log.go:172] (0xc000138160) (0xc00065c1e0) Create stream\nI0721 00:53:04.649490 1206 log.go:172] (0xc000138160) (0xc00065c1e0) Stream added, broadcasting: 1\nI0721 00:53:04.652592 1206 log.go:172] (0xc000138160) Reply frame received for 1\nI0721 00:53:04.652647 1206 log.go:172] (0xc000138160) (0xc0002a0aa0) Create stream\nI0721 00:53:04.652678 1206 log.go:172] (0xc000138160) (0xc0002a0aa0) Stream added, broadcasting: 3\nI0721 00:53:04.653879 1206 log.go:172] (0xc000138160) Reply frame received for 3\nI0721 00:53:04.653937 1206 log.go:172] (0xc000138160) (0xc00065c280) Create stream\nI0721 00:53:04.653958 1206 log.go:172] (0xc000138160) (0xc00065c280) Stream added, broadcasting: 5\nI0721 00:53:04.654964 1206 log.go:172] (0xc000138160) Reply frame received for 5\nI0721 00:53:04.762028 1206 log.go:172] (0xc000138160) Data frame received for 3\nI0721 00:53:04.762095 1206 log.go:172] (0xc0002a0aa0) (3) Data frame handling\nI0721 00:53:04.762146 1206 log.go:172] (0xc000138160) Data frame received for 5\nI0721 00:53:04.762210 1206 log.go:172] (0xc00065c280) (5) Data frame handling\nI0721 00:53:04.762244 1206 log.go:172] (0xc0002a0aa0) (3) Data frame sent\nI0721 00:53:04.762306 1206 log.go:172] (0xc000138160) Data frame received for 3\nI0721 00:53:04.762332 1206 log.go:172] (0xc0002a0aa0) (3) Data frame handling\nI0721 00:53:04.763926 1206 log.go:172] (0xc000138160) Data frame received for 1\nI0721 00:53:04.763942 1206 log.go:172] (0xc00065c1e0) (1) Data frame handling\nI0721 00:53:04.763959 1206 log.go:172] (0xc00065c1e0) (1) Data frame sent\nI0721 00:53:04.764005 1206 log.go:172] (0xc000138160) (0xc00065c1e0) Stream removed, broadcasting: 
1\nI0721 00:53:04.764159 1206 log.go:172] (0xc000138160) Go away received\nI0721 00:53:04.764197 1206 log.go:172] (0xc000138160) (0xc00065c1e0) Stream removed, broadcasting: 1\nI0721 00:53:04.764214 1206 log.go:172] (0xc000138160) (0xc0002a0aa0) Stream removed, broadcasting: 3\nI0721 00:53:04.764221 1206 log.go:172] (0xc000138160) (0xc00065c280) Stream removed, broadcasting: 5\n" Jul 21 00:53:04.769: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 21 00:53:04.769: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 21 00:53:04.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 21 00:53:05.036: INFO: stderr: "I0721 00:53:04.897381 1228 log.go:172] (0xc00082c2c0) (0xc000726640) Create stream\nI0721 00:53:04.897455 1228 log.go:172] (0xc00082c2c0) (0xc000726640) Stream added, broadcasting: 1\nI0721 00:53:04.899353 1228 log.go:172] (0xc00082c2c0) Reply frame received for 1\nI0721 00:53:04.899391 1228 log.go:172] (0xc00082c2c0) (0xc0005d2c80) Create stream\nI0721 00:53:04.899404 1228 log.go:172] (0xc00082c2c0) (0xc0005d2c80) Stream added, broadcasting: 3\nI0721 00:53:04.900233 1228 log.go:172] (0xc00082c2c0) Reply frame received for 3\nI0721 00:53:04.900273 1228 log.go:172] (0xc00082c2c0) (0xc0004f8000) Create stream\nI0721 00:53:04.900286 1228 log.go:172] (0xc00082c2c0) (0xc0004f8000) Stream added, broadcasting: 5\nI0721 00:53:04.901148 1228 log.go:172] (0xc00082c2c0) Reply frame received for 5\nI0721 00:53:05.029245 1228 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0721 00:53:05.029287 1228 log.go:172] (0xc0005d2c80) (3) Data frame handling\nI0721 00:53:05.029311 1228 log.go:172] (0xc0005d2c80) (3) Data frame sent\nI0721 00:53:05.029475 1228 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0721 
00:53:05.029515 1228 log.go:172] (0xc0005d2c80) (3) Data frame handling\nI0721 00:53:05.029589 1228 log.go:172] (0xc00082c2c0) Data frame received for 5\nI0721 00:53:05.029602 1228 log.go:172] (0xc0004f8000) (5) Data frame handling\nI0721 00:53:05.031353 1228 log.go:172] (0xc00082c2c0) Data frame received for 1\nI0721 00:53:05.031378 1228 log.go:172] (0xc000726640) (1) Data frame handling\nI0721 00:53:05.031391 1228 log.go:172] (0xc000726640) (1) Data frame sent\nI0721 00:53:05.031404 1228 log.go:172] (0xc00082c2c0) (0xc000726640) Stream removed, broadcasting: 1\nI0721 00:53:05.031418 1228 log.go:172] (0xc00082c2c0) Go away received\nI0721 00:53:05.031709 1228 log.go:172] (0xc00082c2c0) (0xc000726640) Stream removed, broadcasting: 1\nI0721 00:53:05.031746 1228 log.go:172] (0xc00082c2c0) (0xc0005d2c80) Stream removed, broadcasting: 3\nI0721 00:53:05.031772 1228 log.go:172] (0xc00082c2c0) (0xc0004f8000) Stream removed, broadcasting: 5\n" Jul 21 00:53:05.036: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 21 00:53:05.036: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 21 00:53:05.036: INFO: Waiting for statefulset status.replicas updated to 0 Jul 21 00:53:05.039: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jul 21 00:53:15.049: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 21 00:53:15.050: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 21 00:53:15.050: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 21 00:53:15.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999641s Jul 21 00:53:16.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989330687s Jul 21 00:53:17.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 
7.985207497s Jul 21 00:53:18.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980532629s Jul 21 00:53:19.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975048272s Jul 21 00:53:20.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970273829s Jul 21 00:53:21.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964967601s Jul 21 00:53:22.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960819534s Jul 21 00:53:23.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955723272s Jul 21 00:53:24.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.941111ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-vzg9l Jul 21 00:53:25.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 21 00:53:25.339: INFO: stderr: "I0721 00:53:25.244632 1251 log.go:172] (0xc00013a790) (0xc000720640) Create stream\nI0721 00:53:25.244682 1251 log.go:172] (0xc00013a790) (0xc000720640) Stream added, broadcasting: 1\nI0721 00:53:25.246822 1251 log.go:172] (0xc00013a790) Reply frame received for 1\nI0721 00:53:25.246885 1251 log.go:172] (0xc00013a790) (0xc00062abe0) Create stream\nI0721 00:53:25.246910 1251 log.go:172] (0xc00013a790) (0xc00062abe0) Stream added, broadcasting: 3\nI0721 00:53:25.247851 1251 log.go:172] (0xc00013a790) Reply frame received for 3\nI0721 00:53:25.247885 1251 log.go:172] (0xc00013a790) (0xc0007206e0) Create stream\nI0721 00:53:25.247894 1251 log.go:172] (0xc00013a790) (0xc0007206e0) Stream added, broadcasting: 5\nI0721 00:53:25.248869 1251 log.go:172] (0xc00013a790) Reply frame received for 5\nI0721 00:53:25.323708 1251 log.go:172] (0xc00013a790) Data frame received for 3\nI0721 00:53:25.323763 1251 log.go:172] 
(0xc00062abe0) (3) Data frame handling\nI0721 00:53:25.323783 1251 log.go:172] (0xc00062abe0) (3) Data frame sent\nI0721 00:53:25.323797 1251 log.go:172] (0xc00013a790) Data frame received for 3\nI0721 00:53:25.323811 1251 log.go:172] (0xc00062abe0) (3) Data frame handling\nI0721 00:53:25.323826 1251 log.go:172] (0xc00013a790) Data frame received for 5\nI0721 00:53:25.323836 1251 log.go:172] (0xc0007206e0) (5) Data frame handling\nI0721 00:53:25.333940 1251 log.go:172] (0xc00013a790) Data frame received for 1\nI0721 00:53:25.333967 1251 log.go:172] (0xc000720640) (1) Data frame handling\nI0721 00:53:25.333978 1251 log.go:172] (0xc000720640) (1) Data frame sent\nI0721 00:53:25.333995 1251 log.go:172] (0xc00013a790) (0xc000720640) Stream removed, broadcasting: 1\nI0721 00:53:25.334012 1251 log.go:172] (0xc00013a790) Go away received\nI0721 00:53:25.334258 1251 log.go:172] (0xc00013a790) (0xc000720640) Stream removed, broadcasting: 1\nI0721 00:53:25.334276 1251 log.go:172] (0xc00013a790) (0xc00062abe0) Stream removed, broadcasting: 3\nI0721 00:53:25.334282 1251 log.go:172] (0xc00013a790) (0xc0007206e0) Stream removed, broadcasting: 5\n" Jul 21 00:53:25.340: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 21 00:53:25.340: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 21 00:53:25.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 21 00:53:25.521: INFO: stderr: "I0721 00:53:25.465962 1274 log.go:172] (0xc000600420) (0xc00079f400) Create stream\nI0721 00:53:25.466035 1274 log.go:172] (0xc000600420) (0xc00079f400) Stream added, broadcasting: 1\nI0721 00:53:25.470022 1274 log.go:172] (0xc000600420) Reply frame received for 1\nI0721 00:53:25.470063 1274 log.go:172] (0xc000600420) (0xc0006ce000) Create 
stream\nI0721 00:53:25.470071 1274 log.go:172] (0xc000600420) (0xc0006ce000) Stream added, broadcasting: 3\nI0721 00:53:25.470963 1274 log.go:172] (0xc000600420) Reply frame received for 3\nI0721 00:53:25.470998 1274 log.go:172] (0xc000600420) (0xc0006ce0a0) Create stream\nI0721 00:53:25.471008 1274 log.go:172] (0xc000600420) (0xc0006ce0a0) Stream added, broadcasting: 5\nI0721 00:53:25.471734 1274 log.go:172] (0xc000600420) Reply frame received for 5\nI0721 00:53:25.516298 1274 log.go:172] (0xc000600420) Data frame received for 5\nI0721 00:53:25.516334 1274 log.go:172] (0xc0006ce0a0) (5) Data frame handling\nI0721 00:53:25.516374 1274 log.go:172] (0xc000600420) Data frame received for 3\nI0721 00:53:25.516383 1274 log.go:172] (0xc0006ce000) (3) Data frame handling\nI0721 00:53:25.516393 1274 log.go:172] (0xc0006ce000) (3) Data frame sent\nI0721 00:53:25.516402 1274 log.go:172] (0xc000600420) Data frame received for 3\nI0721 00:53:25.516409 1274 log.go:172] (0xc0006ce000) (3) Data frame handling\nI0721 00:53:25.517785 1274 log.go:172] (0xc000600420) Data frame received for 1\nI0721 00:53:25.517803 1274 log.go:172] (0xc00079f400) (1) Data frame handling\nI0721 00:53:25.517810 1274 log.go:172] (0xc00079f400) (1) Data frame sent\nI0721 00:53:25.517817 1274 log.go:172] (0xc000600420) (0xc00079f400) Stream removed, broadcasting: 1\nI0721 00:53:25.517880 1274 log.go:172] (0xc000600420) Go away received\nI0721 00:53:25.517959 1274 log.go:172] (0xc000600420) (0xc00079f400) Stream removed, broadcasting: 1\nI0721 00:53:25.517971 1274 log.go:172] (0xc000600420) (0xc0006ce000) Stream removed, broadcasting: 3\nI0721 00:53:25.517984 1274 log.go:172] (0xc000600420) (0xc0006ce0a0) Stream removed, broadcasting: 5\n" Jul 21 00:53:25.521: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 21 00:53:25.521: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 21 00:53:25.521: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzg9l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 21 00:53:25.708: INFO: stderr: "I0721 00:53:25.646176 1297 log.go:172] (0xc00015c840) (0xc0007c6640) Create stream\nI0721 00:53:25.646231 1297 log.go:172] (0xc00015c840) (0xc0007c6640) Stream added, broadcasting: 1\nI0721 00:53:25.649062 1297 log.go:172] (0xc00015c840) Reply frame received for 1\nI0721 00:53:25.649118 1297 log.go:172] (0xc00015c840) (0xc00067ad20) Create stream\nI0721 00:53:25.649135 1297 log.go:172] (0xc00015c840) (0xc00067ad20) Stream added, broadcasting: 3\nI0721 00:53:25.650150 1297 log.go:172] (0xc00015c840) Reply frame received for 3\nI0721 00:53:25.650182 1297 log.go:172] (0xc00015c840) (0xc0007c66e0) Create stream\nI0721 00:53:25.650189 1297 log.go:172] (0xc00015c840) (0xc0007c66e0) Stream added, broadcasting: 5\nI0721 00:53:25.651161 1297 log.go:172] (0xc00015c840) Reply frame received for 5\nI0721 00:53:25.702819 1297 log.go:172] (0xc00015c840) Data frame received for 3\nI0721 00:53:25.702873 1297 log.go:172] (0xc00067ad20) (3) Data frame handling\nI0721 00:53:25.702889 1297 log.go:172] (0xc00067ad20) (3) Data frame sent\nI0721 00:53:25.702900 1297 log.go:172] (0xc00015c840) Data frame received for 3\nI0721 00:53:25.702910 1297 log.go:172] (0xc00067ad20) (3) Data frame handling\nI0721 00:53:25.702955 1297 log.go:172] (0xc00015c840) Data frame received for 5\nI0721 00:53:25.702967 1297 log.go:172] (0xc0007c66e0) (5) Data frame handling\nI0721 00:53:25.704675 1297 log.go:172] (0xc00015c840) Data frame received for 1\nI0721 00:53:25.704688 1297 log.go:172] (0xc0007c6640) (1) Data frame handling\nI0721 00:53:25.704695 1297 log.go:172] (0xc0007c6640) (1) Data frame sent\nI0721 00:53:25.704703 1297 log.go:172] (0xc00015c840) (0xc0007c6640) Stream removed, broadcasting: 1\nI0721 00:53:25.704886 1297 log.go:172] (0xc00015c840) Go away received\nI0721 
00:53:25.704950 1297 log.go:172] (0xc00015c840) (0xc0007c6640) Stream removed, broadcasting: 1\nI0721 00:53:25.704973 1297 log.go:172] (0xc00015c840) (0xc00067ad20) Stream removed, broadcasting: 3\nI0721 00:53:25.704987 1297 log.go:172] (0xc00015c840) (0xc0007c66e0) Stream removed, broadcasting: 5\n" Jul 21 00:53:25.708: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 21 00:53:25.708: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 21 00:53:25.708: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 21 00:53:55.724: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vzg9l Jul 21 00:53:55.727: INFO: Scaling statefulset ss to 0 Jul 21 00:53:55.735: INFO: Waiting for statefulset status.replicas updated to 0 Jul 21 00:53:55.737: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:53:55.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vzg9l" for this suite. 
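Editor's note: the break/restore cycle logged above hinges on one trick: the test moves nginx's index.html out of the web root so the HTTP readiness probe fails, then moves it back to make the pod Ready again. A minimal local sketch of that toggle (the /tmp/ss-demo paths are illustrative stand-ins; in the test the same `mv -v ... || true` runs inside each ss-* pod via `kubectl exec`, against /usr/share/nginx/html):

```shell
# Stand-in for the pod's web root (/usr/share/nginx/html inside each ss-* pod).
mkdir -p /tmp/ss-demo/html
echo ok > /tmp/ss-demo/html/index.html

# Break readiness: move the probed file away. "|| true" mirrors the test's
# command so the exec does not fail if the file was already moved.
mv -v /tmp/ss-demo/html/index.html /tmp/ss-demo/ || true

# Restore readiness: move it back into the web root.
mv -v /tmp/ss-demo/index.html /tmp/ss-demo/html/ || true

test -f /tmp/ss-demo/html/index.html && echo "readiness restored"
```

With readiness broken on all three pods, the log's "doesn't scale past 3" countdown verifies the StatefulSet controller halts; restoring the file lets the ordered scale-down to 0 proceed.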
Jul 21 00:54:01.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:54:01.836: INFO: namespace: e2e-tests-statefulset-vzg9l, resource: bindings, ignored listing per whitelist Jul 21 00:54:01.841: INFO: namespace e2e-tests-statefulset-vzg9l deletion completed in 6.077093312s • [SLOW TEST:98.198 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:54:01.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-jpdr STEP: Creating a pod to test atomic-volume-subpath Jul 21 00:54:01.996: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jpdr" in namespace 
"e2e-tests-subpath-zs5rw" to be "success or failure" Jul 21 00:54:02.018: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Pending", Reason="", readiness=false. Elapsed: 22.179573ms Jul 21 00:54:04.023: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026503686s Jul 21 00:54:06.027: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030890742s Jul 21 00:54:08.036: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039855675s Jul 21 00:54:10.039: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 8.043478617s Jul 21 00:54:12.044: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 10.047899132s Jul 21 00:54:14.048: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 12.051870794s Jul 21 00:54:16.052: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 14.056127441s Jul 21 00:54:18.057: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 16.060832374s Jul 21 00:54:20.061: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 18.065086261s Jul 21 00:54:22.066: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 20.069621808s Jul 21 00:54:24.070: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 22.073992884s Jul 21 00:54:26.074: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Running", Reason="", readiness=false. Elapsed: 24.077883749s Jul 21 00:54:28.078: INFO: Pod "pod-subpath-test-downwardapi-jpdr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.082295718s STEP: Saw pod success Jul 21 00:54:28.078: INFO: Pod "pod-subpath-test-downwardapi-jpdr" satisfied condition "success or failure" Jul 21 00:54:28.082: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-jpdr container test-container-subpath-downwardapi-jpdr: STEP: delete the pod Jul 21 00:54:28.133: INFO: Waiting for pod pod-subpath-test-downwardapi-jpdr to disappear Jul 21 00:54:28.144: INFO: Pod pod-subpath-test-downwardapi-jpdr no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jpdr Jul 21 00:54:28.144: INFO: Deleting pod "pod-subpath-test-downwardapi-jpdr" in namespace "e2e-tests-subpath-zs5rw" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:54:28.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-zs5rw" for this suite. Jul 21 00:54:34.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:54:34.185: INFO: namespace: e2e-tests-subpath-zs5rw, resource: bindings, ignored listing per whitelist Jul 21 00:54:34.247: INFO: namespace e2e-tests-subpath-zs5rw deletion completed in 6.097078667s • [SLOW TEST:32.405 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:54:34.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vgvmz STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 21 00:54:34.344: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 21 00:54:58.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.168:8080/dial?request=hostName&protocol=http&host=10.244.1.167&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vgvmz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 21 00:54:58.532: INFO: >>> kubeConfig: /root/.kube/config I0721 00:54:58.567361 6 log.go:172] (0xc00168a0b0) (0xc001f643c0) Create stream I0721 00:54:58.567390 6 log.go:172] (0xc00168a0b0) (0xc001f643c0) Stream added, broadcasting: 1 I0721 00:54:58.570087 6 log.go:172] (0xc00168a0b0) Reply frame received for 1 I0721 00:54:58.570148 6 log.go:172] (0xc00168a0b0) (0xc001902640) Create stream I0721 00:54:58.570165 6 log.go:172] (0xc00168a0b0) (0xc001902640) Stream added, broadcasting: 3 I0721 00:54:58.571381 6 log.go:172] (0xc00168a0b0) Reply frame received for 3 I0721 00:54:58.571410 6 log.go:172] (0xc00168a0b0) (0xc001f64460) Create stream I0721 00:54:58.571422 6 log.go:172] (0xc00168a0b0) (0xc001f64460) Stream 
added, broadcasting: 5 I0721 00:54:58.572490 6 log.go:172] (0xc00168a0b0) Reply frame received for 5 I0721 00:54:58.665191 6 log.go:172] (0xc00168a0b0) Data frame received for 3 I0721 00:54:58.665225 6 log.go:172] (0xc001902640) (3) Data frame handling I0721 00:54:58.665267 6 log.go:172] (0xc001902640) (3) Data frame sent I0721 00:54:58.666421 6 log.go:172] (0xc00168a0b0) Data frame received for 5 I0721 00:54:58.666474 6 log.go:172] (0xc001f64460) (5) Data frame handling I0721 00:54:58.666505 6 log.go:172] (0xc00168a0b0) Data frame received for 3 I0721 00:54:58.666518 6 log.go:172] (0xc001902640) (3) Data frame handling I0721 00:54:58.668315 6 log.go:172] (0xc00168a0b0) Data frame received for 1 I0721 00:54:58.668337 6 log.go:172] (0xc001f643c0) (1) Data frame handling I0721 00:54:58.668358 6 log.go:172] (0xc001f643c0) (1) Data frame sent I0721 00:54:58.668456 6 log.go:172] (0xc00168a0b0) (0xc001f643c0) Stream removed, broadcasting: 1 I0721 00:54:58.668571 6 log.go:172] (0xc00168a0b0) (0xc001f643c0) Stream removed, broadcasting: 1 I0721 00:54:58.668594 6 log.go:172] (0xc00168a0b0) (0xc001902640) Stream removed, broadcasting: 3 I0721 00:54:58.668612 6 log.go:172] (0xc00168a0b0) (0xc001f64460) Stream removed, broadcasting: 5 I0721 00:54:58.668635 6 log.go:172] (0xc00168a0b0) Go away received Jul 21 00:54:58.668: INFO: Waiting for endpoints: map[] Jul 21 00:54:58.672: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.168:8080/dial?request=hostName&protocol=http&host=10.244.2.142&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vgvmz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 21 00:54:58.672: INFO: >>> kubeConfig: /root/.kube/config I0721 00:54:58.699072 6 log.go:172] (0xc0021202c0) (0xc002011680) Create stream I0721 00:54:58.699120 6 log.go:172] (0xc0021202c0) (0xc002011680) Stream added, broadcasting: 1 I0721 00:54:58.701616 6 log.go:172] 
(0xc0021202c0) Reply frame received for 1 I0721 00:54:58.701655 6 log.go:172] (0xc0021202c0) (0xc001839220) Create stream I0721 00:54:58.701670 6 log.go:172] (0xc0021202c0) (0xc001839220) Stream added, broadcasting: 3 I0721 00:54:58.702394 6 log.go:172] (0xc0021202c0) Reply frame received for 3 I0721 00:54:58.702413 6 log.go:172] (0xc0021202c0) (0xc002011720) Create stream I0721 00:54:58.702418 6 log.go:172] (0xc0021202c0) (0xc002011720) Stream added, broadcasting: 5 I0721 00:54:58.703191 6 log.go:172] (0xc0021202c0) Reply frame received for 5 I0721 00:54:58.774682 6 log.go:172] (0xc0021202c0) Data frame received for 3 I0721 00:54:58.774718 6 log.go:172] (0xc001839220) (3) Data frame handling I0721 00:54:58.774752 6 log.go:172] (0xc001839220) (3) Data frame sent I0721 00:54:58.775704 6 log.go:172] (0xc0021202c0) Data frame received for 5 I0721 00:54:58.775739 6 log.go:172] (0xc002011720) (5) Data frame handling I0721 00:54:58.775819 6 log.go:172] (0xc0021202c0) Data frame received for 3 I0721 00:54:58.775832 6 log.go:172] (0xc001839220) (3) Data frame handling I0721 00:54:58.777908 6 log.go:172] (0xc0021202c0) Data frame received for 1 I0721 00:54:58.777930 6 log.go:172] (0xc002011680) (1) Data frame handling I0721 00:54:58.777946 6 log.go:172] (0xc002011680) (1) Data frame sent I0721 00:54:58.777958 6 log.go:172] (0xc0021202c0) (0xc002011680) Stream removed, broadcasting: 1 I0721 00:54:58.777971 6 log.go:172] (0xc0021202c0) Go away received I0721 00:54:58.778091 6 log.go:172] (0xc0021202c0) (0xc002011680) Stream removed, broadcasting: 1 I0721 00:54:58.778126 6 log.go:172] (0xc0021202c0) (0xc001839220) Stream removed, broadcasting: 3 I0721 00:54:58.778149 6 log.go:172] (0xc0021202c0) (0xc002011720) Stream removed, broadcasting: 5 Jul 21 00:54:58.778: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:54:58.778: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-vgvmz" for this suite. Jul 21 00:55:22.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:55:22.869: INFO: namespace: e2e-tests-pod-network-test-vgvmz, resource: bindings, ignored listing per whitelist Jul 21 00:55:22.904: INFO: namespace e2e-tests-pod-network-test-vgvmz deletion completed in 24.122322283s • [SLOW TEST:48.657 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:55:22.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Jul 21 00:55:27.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-52pkx" for this suite. Jul 21 00:56:09.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:56:09.147: INFO: namespace: e2e-tests-kubelet-test-52pkx, resource: bindings, ignored listing per whitelist Jul 21 00:56:09.171: INFO: namespace e2e-tests-kubelet-test-52pkx deletion completed in 42.093003075s • [SLOW TEST:46.266 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:56:09.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:56:09.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-29j4f" for this suite. Jul 21 00:56:15.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:56:15.432: INFO: namespace: e2e-tests-kubelet-test-29j4f, resource: bindings, ignored listing per whitelist Jul 21 00:56:15.452: INFO: namespace e2e-tests-kubelet-test-29j4f deletion completed in 6.103139946s • [SLOW TEST:6.281 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:56:15.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall 
+answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x849x.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x849x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x849x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x849x.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x849x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x849x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 21 00:56:21.734: INFO: DNS probes using e2e-tests-dns-x849x/dns-test-fab7fe7d-caec-11ea-86e4-0242ac110009 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:56:21.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-x849x" for this suite. Jul 21 00:56:27.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 21 00:56:27.800: INFO: namespace: e2e-tests-dns-x849x, resource: bindings, ignored listing per whitelist Jul 21 00:56:27.856: INFO: namespace e2e-tests-dns-x849x deletion completed in 6.085107162s • [SLOW TEST:12.403 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 21 00:56:27.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 21 00:56:38.047: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:38.051: INFO: Pod pod-with-poststart-http-hook still exists Jul 21 00:56:40.051: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:40.056: INFO: Pod pod-with-poststart-http-hook still exists Jul 21 00:56:42.051: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:42.055: INFO: Pod pod-with-poststart-http-hook still exists Jul 21 00:56:44.051: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:44.056: INFO: Pod pod-with-poststart-http-hook still exists Jul 21 00:56:46.051: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:46.055: INFO: Pod pod-with-poststart-http-hook still exists Jul 21 00:56:48.051: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 21 00:56:48.055: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 21 00:56:48.055: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v6jx8" for this suite.
Jul 21 00:57:10.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:57:10.125: INFO: namespace: e2e-tests-container-lifecycle-hook-v6jx8, resource: bindings, ignored listing per whitelist
Jul 21 00:57:10.188: INFO: namespace e2e-tests-container-lifecycle-hook-v6jx8 deletion completed in 22.130035832s

• [SLOW TEST:42.332 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:57:10.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 00:57:10.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-qnmmf" to be "success or failure"
Jul 21 00:57:10.297: INFO: Pod "downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.800018ms
Jul 21 00:57:12.301: INFO: Pod "downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007704217s
Jul 21 00:57:14.305: INFO: Pod "downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011504637s
STEP: Saw pod success
Jul 21 00:57:14.305: INFO: Pod "downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:57:14.308: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 00:57:14.328: INFO: Waiting for pod downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:57:14.333: INFO: Pod downwardapi-volume-1b5413ac-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:57:14.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qnmmf" for this suite.
Jul 21 00:57:20.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:57:20.398: INFO: namespace: e2e-tests-projected-qnmmf, resource: bindings, ignored listing per whitelist
Jul 21 00:57:20.444: INFO: namespace e2e-tests-projected-qnmmf deletion completed in 6.107485621s

• [SLOW TEST:10.255 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:57:20.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-21cd25e7-caed-11ea-86e4-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-21cd2642-caed-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-21cd25e7-caed-11ea-86e4-0242ac110009
STEP: Updating configmap cm-test-opt-upd-21cd2642-caed-11ea-86e4-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-21cd2661-caed-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:57:31.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m5jz6" for this suite.
Jul 21 00:57:46.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:57:46.055: INFO: namespace: e2e-tests-configmap-m5jz6, resource: bindings, ignored listing per whitelist
Jul 21 00:57:46.096: INFO: namespace e2e-tests-configmap-m5jz6 deletion completed in 14.280853947s

• [SLOW TEST:25.653 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:57:46.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 00:57:46.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-l2rsh" to be "success or failure"
Jul 21 00:57:46.260: INFO: Pod "downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223787ms
Jul 21 00:57:48.264: INFO: Pod "downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00813911s
Jul 21 00:57:50.268: INFO: Pod "downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011893486s
STEP: Saw pod success
Jul 21 00:57:50.268: INFO: Pod "downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:57:50.271: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 00:57:50.285: INFO: Waiting for pod downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:57:50.314: INFO: Pod downwardapi-volume-30c31387-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:57:50.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l2rsh" for this suite.
Jul 21 00:57:56.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:57:56.416: INFO: namespace: e2e-tests-projected-l2rsh, resource: bindings, ignored listing per whitelist
Jul 21 00:57:56.447: INFO: namespace e2e-tests-projected-l2rsh deletion completed in 6.129856212s

• [SLOW TEST:10.351 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:57:56.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 00:57:56.542: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 21 00:58:02.910: INFO: Waiting up to 5m0s for pod "pod-3aac8b3e-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-4vc6c" to be "success or failure"
Jul 21 00:58:02.914: INFO: Pod "pod-3aac8b3e-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.550736ms
Jul 21 00:58:04.917: INFO: Pod "pod-3aac8b3e-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007580144s
Jul 21 00:58:06.922: INFO: Pod "pod-3aac8b3e-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0120723s
STEP: Saw pod success
Jul 21 00:58:06.922: INFO: Pod "pod-3aac8b3e-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:58:06.925: INFO: Trying to get logs from node hunter-worker pod pod-3aac8b3e-caed-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 00:58:06.981: INFO: Waiting for pod pod-3aac8b3e-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:58:06.993: INFO: Pod pod-3aac8b3e-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:58:06.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4vc6c" for this suite.
Jul 21 00:58:13.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:58:13.113: INFO: namespace: e2e-tests-emptydir-4vc6c, resource: bindings, ignored listing per whitelist
Jul 21 00:58:13.117: INFO: namespace e2e-tests-emptydir-4vc6c deletion completed in 6.12102109s

• [SLOW TEST:10.366 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:58:13.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-40db1ac9-caed-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 00:58:13.262: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-6lznq" to be "success or failure"
Jul 21 00:58:13.281: INFO: Pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.886853ms
Jul 21 00:58:15.286: INFO: Pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023686431s
Jul 21 00:58:17.290: INFO: Pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.027245283s
Jul 21 00:58:19.294: INFO: Pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031979804s
STEP: Saw pod success
Jul 21 00:58:19.294: INFO: Pod "pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:58:19.297: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 21 00:58:19.337: INFO: Waiting for pod pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:58:19.351: INFO: Pod pod-projected-configmaps-40dcbf04-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:58:19.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6lznq" for this suite.
Jul 21 00:58:25.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:58:25.457: INFO: namespace: e2e-tests-projected-6lznq, resource: bindings, ignored listing per whitelist
Jul 21 00:58:25.474: INFO: namespace e2e-tests-projected-6lznq deletion completed in 6.120087979s

• [SLOW TEST:12.356 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:58:25.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 00:58:25.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-qt25b" to be "success or failure"
Jul 21 00:58:26.177: INFO: Pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 184.843089ms
Jul 21 00:58:28.261: INFO: Pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268847245s
Jul 21 00:58:30.264: INFO: Pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.27134265s
Jul 21 00:58:32.267: INFO: Pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274300337s
STEP: Saw pod success
Jul 21 00:58:32.267: INFO: Pod "downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:58:32.269: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 00:58:32.385: INFO: Waiting for pod downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:58:32.406: INFO: Pod downwardapi-volume-48712d7b-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:58:32.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qt25b" for this suite.
Jul 21 00:58:38.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:58:38.569: INFO: namespace: e2e-tests-downward-api-qt25b, resource: bindings, ignored listing per whitelist
Jul 21 00:58:38.603: INFO: namespace e2e-tests-downward-api-qt25b deletion completed in 6.192664131s

• [SLOW TEST:13.129 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:58:38.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:58:46.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-x84l2" for this suite.
Jul 21 00:58:52.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:58:52.939: INFO: namespace: e2e-tests-kubelet-test-x84l2, resource: bindings, ignored listing per whitelist
Jul 21 00:58:52.958: INFO: namespace e2e-tests-kubelet-test-x84l2 deletion completed in 6.130408693s

• [SLOW TEST:14.355 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:58:52.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-td6g
STEP: Creating a pod to test atomic-volume-subpath
Jul 21 00:58:53.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-td6g" in namespace "e2e-tests-subpath-jq5lp" to be "success or failure"
Jul 21 00:58:53.446: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.382565ms
Jul 21 00:58:55.450: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018363781s
Jul 21 00:58:57.526: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094087814s
Jul 21 00:58:59.529: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0977398s
Jul 21 00:59:01.534: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 8.101988344s
Jul 21 00:59:03.538: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 10.1062503s
Jul 21 00:59:05.541: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 12.109869101s
Jul 21 00:59:07.565: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 14.133720508s
Jul 21 00:59:09.569: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 16.137795384s
Jul 21 00:59:11.574: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 18.142353745s
Jul 21 00:59:13.578: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 20.146196039s
Jul 21 00:59:15.582: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 22.150375847s
Jul 21 00:59:17.586: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 24.154415476s
Jul 21 00:59:19.590: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Running", Reason="", readiness=false. Elapsed: 26.158238147s
Jul 21 00:59:21.594: INFO: Pod "pod-subpath-test-secret-td6g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.162501093s
STEP: Saw pod success
Jul 21 00:59:21.594: INFO: Pod "pod-subpath-test-secret-td6g" satisfied condition "success or failure"
Jul 21 00:59:21.597: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-td6g container test-container-subpath-secret-td6g: 
STEP: delete the pod
Jul 21 00:59:21.624: INFO: Waiting for pod pod-subpath-test-secret-td6g to disappear
Jul 21 00:59:21.628: INFO: Pod pod-subpath-test-secret-td6g no longer exists
STEP: Deleting pod pod-subpath-test-secret-td6g
Jul 21 00:59:21.628: INFO: Deleting pod "pod-subpath-test-secret-td6g" in namespace "e2e-tests-subpath-jq5lp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:59:21.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jq5lp" for this suite.
Jul 21 00:59:27.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:59:27.739: INFO: namespace: e2e-tests-subpath-jq5lp, resource: bindings, ignored listing per whitelist
Jul 21 00:59:27.750: INFO: namespace e2e-tests-subpath-jq5lp deletion completed in 6.117789747s

• [SLOW TEST:34.792 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:59:27.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 21 00:59:27.939: INFO: Waiting up to 5m0s for pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-qsdlk" to be "success or failure"
Jul 21 00:59:27.946: INFO: Pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.380023ms
Jul 21 00:59:29.950: INFO: Pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011427526s
Jul 21 00:59:32.053: INFO: Pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114036615s
Jul 21 00:59:34.057: INFO: Pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118264843s
STEP: Saw pod success
Jul 21 00:59:34.057: INFO: Pod "pod-6d4fb881-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 00:59:34.060: INFO: Trying to get logs from node hunter-worker pod pod-6d4fb881-caed-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 00:59:34.142: INFO: Waiting for pod pod-6d4fb881-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 00:59:34.156: INFO: Pod pod-6d4fb881-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 00:59:34.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qsdlk" for this suite.
Jul 21 00:59:40.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 00:59:40.272: INFO: namespace: e2e-tests-emptydir-qsdlk, resource: bindings, ignored listing per whitelist
Jul 21 00:59:40.285: INFO: namespace e2e-tests-emptydir-qsdlk deletion completed in 6.100235575s

• [SLOW TEST:12.534 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 00:59:40.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:00:40.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vwv7z" for this suite.
Jul 21 01:01:02.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:01:02.851: INFO: namespace: e2e-tests-container-probe-vwv7z, resource: bindings, ignored listing per whitelist
Jul 21 01:01:02.911: INFO: namespace e2e-tests-container-probe-vwv7z deletion completed in 22.111678277s

• [SLOW TEST:82.626 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:01:02.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 21 01:01:02.997: INFO: Waiting up to 5m0s for pod "pod-a609581b-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-pkk25" to be "success or failure"
Jul 21 01:01:03.011: INFO: Pod "pod-a609581b-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.75269ms
Jul 21 01:01:05.016: INFO: Pod "pod-a609581b-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018151777s
Jul 21 01:01:07.020: INFO: Pod "pod-a609581b-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022658523s
STEP: Saw pod success
Jul 21 01:01:07.020: INFO: Pod "pod-a609581b-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:01:07.024: INFO: Trying to get logs from node hunter-worker pod pod-a609581b-caed-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:01:07.045: INFO: Waiting for pod pod-a609581b-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 01:01:07.049: INFO: Pod pod-a609581b-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:01:07.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pkk25" for this suite.
Jul 21 01:01:13.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:01:13.129: INFO: namespace: e2e-tests-emptydir-pkk25, resource: bindings, ignored listing per whitelist
Jul 21 01:01:13.146: INFO: namespace e2e-tests-emptydir-pkk25 deletion completed in 6.093141542s

• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:01:13.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 01:01:13.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-r77fb" to be "success or failure"
Jul 21 01:01:13.320: INFO: Pod "downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 33.609849ms
Jul 21 01:01:15.323: INFO: Pod "downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037338905s
Jul 21 01:01:17.328: INFO: Pod "downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042068592s
STEP: Saw pod success
Jul 21 01:01:17.328: INFO: Pod "downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:01:17.331: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 01:01:17.403: INFO: Waiting for pod downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 01:01:17.409: INFO: Pod downwardapi-volume-ac2a72db-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:01:17.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r77fb" for this suite.
Jul 21 01:01:23.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:01:23.462: INFO: namespace: e2e-tests-projected-r77fb, resource: bindings, ignored listing per whitelist
Jul 21 01:01:23.499: INFO: namespace e2e-tests-projected-r77fb deletion completed in 6.085720912s

• [SLOW TEST:10.353 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:01:23.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 21 01:01:30.351: INFO: 0 pods remaining
Jul 21 01:01:30.351: INFO: 0 pods has nil DeletionTimestamp
Jul 21 01:01:30.351: INFO: 
Jul 21 01:01:32.078: INFO: 0 pods remaining
Jul 21 01:01:32.078: INFO: 0 pods has nil DeletionTimestamp
Jul 21 01:01:32.078: INFO: 
STEP: Gathering metrics
W0721 01:01:32.965442       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 01:01:32.965: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:01:32.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nc7f4" for this suite.
Jul 21 01:01:43.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:01:43.120: INFO: namespace: e2e-tests-gc-nc7f4, resource: bindings, ignored listing per whitelist
Jul 21 01:01:43.123: INFO: namespace e2e-tests-gc-nc7f4 deletion completed in 10.154802566s

• [SLOW TEST:19.624 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:01:43.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-be080ba4-caed-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:01:43.262: INFO: Waiting up to 5m0s for pod "pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-mzm9n" to be "success or failure"
Jul 21 01:01:43.289: INFO: Pod "pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 26.329146ms
Jul 21 01:01:45.354: INFO: Pod "pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091377629s
Jul 21 01:01:47.550: INFO: Pod "pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.287390972s
STEP: Saw pod success
Jul 21 01:01:47.550: INFO: Pod "pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:01:47.552: INFO: Trying to get logs from node hunter-worker pod pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jul 21 01:01:47.776: INFO: Waiting for pod pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 01:01:47.793: INFO: Pod pod-secrets-be0874e8-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:01:47.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mzm9n" for this suite.
Jul 21 01:01:53.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:01:53.897: INFO: namespace: e2e-tests-secrets-mzm9n, resource: bindings, ignored listing per whitelist
Jul 21 01:01:53.903: INFO: namespace e2e-tests-secrets-mzm9n deletion completed in 6.106776308s

• [SLOW TEST:10.779 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:01:53.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-h27x
STEP: Creating a pod to test atomic-volume-subpath
Jul 21 01:01:54.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h27x" in namespace "e2e-tests-subpath-f242x" to be "success or failure"
Jul 21 01:01:54.081: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.841801ms
Jul 21 01:01:56.085: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020270749s
Jul 21 01:01:58.089: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024583296s
Jul 21 01:02:00.093: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028474313s
Jul 21 01:02:02.098: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 8.033071618s
Jul 21 01:02:04.102: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 10.037444754s
Jul 21 01:02:06.106: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 12.041285825s
Jul 21 01:02:08.109: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 14.044780983s
Jul 21 01:02:10.114: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 16.049482702s
Jul 21 01:02:12.118: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 18.053771848s
Jul 21 01:02:14.123: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 20.058148541s
Jul 21 01:02:16.127: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 22.062383613s
Jul 21 01:02:18.131: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Running", Reason="", readiness=false. Elapsed: 24.066840134s
Jul 21 01:02:20.136: INFO: Pod "pod-subpath-test-configmap-h27x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.071039769s
STEP: Saw pod success
Jul 21 01:02:20.136: INFO: Pod "pod-subpath-test-configmap-h27x" satisfied condition "success or failure"
Jul 21 01:02:20.139: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-h27x container test-container-subpath-configmap-h27x: 
STEP: delete the pod
Jul 21 01:02:20.193: INFO: Waiting for pod pod-subpath-test-configmap-h27x to disappear
Jul 21 01:02:20.207: INFO: Pod pod-subpath-test-configmap-h27x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h27x
Jul 21 01:02:20.207: INFO: Deleting pod "pod-subpath-test-configmap-h27x" in namespace "e2e-tests-subpath-f242x"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:02:20.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-f242x" for this suite.
Jul 21 01:02:26.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:02:26.498: INFO: namespace: e2e-tests-subpath-f242x, resource: bindings, ignored listing per whitelist
Jul 21 01:02:26.634: INFO: namespace e2e-tests-subpath-f242x deletion completed in 6.420392092s

• [SLOW TEST:32.731 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:02:26.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 21 01:02:27.039: INFO: Waiting up to 5m0s for pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-kbkh4" to be "success or failure"
Jul 21 01:02:27.107: INFO: Pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 67.90792ms
Jul 21 01:02:29.139: INFO: Pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099997342s
Jul 21 01:02:31.282: INFO: Pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243610783s
Jul 21 01:02:33.286: INFO: Pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24727915s
STEP: Saw pod success
Jul 21 01:02:33.286: INFO: Pod "downward-api-d821470e-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:02:33.289: INFO: Trying to get logs from node hunter-worker pod downward-api-d821470e-caed-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 01:02:33.326: INFO: Waiting for pod downward-api-d821470e-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 01:02:33.339: INFO: Pod downward-api-d821470e-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:02:33.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kbkh4" for this suite.
Jul 21 01:02:39.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:02:39.422: INFO: namespace: e2e-tests-downward-api-kbkh4, resource: bindings, ignored listing per whitelist
Jul 21 01:02:39.441: INFO: namespace e2e-tests-downward-api-kbkh4 deletion completed in 6.098243933s

• [SLOW TEST:12.807 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:02:39.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 21 01:02:39.899: INFO: Waiting up to 5m0s for pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-4x86z" to be "success or failure"
Jul 21 01:02:39.902: INFO: Pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731876ms
Jul 21 01:02:41.905: INFO: Pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006191591s
Jul 21 01:02:43.909: INFO: Pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.009632418s
Jul 21 01:02:45.913: INFO: Pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013774899s
STEP: Saw pod success
Jul 21 01:02:45.913: INFO: Pod "pod-dfc56eb4-caed-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:02:45.915: INFO: Trying to get logs from node hunter-worker2 pod pod-dfc56eb4-caed-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:02:45.947: INFO: Waiting for pod pod-dfc56eb4-caed-11ea-86e4-0242ac110009 to disappear
Jul 21 01:02:45.951: INFO: Pod pod-dfc56eb4-caed-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:02:45.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4x86z" for this suite.
Jul 21 01:02:53.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:02:54.025: INFO: namespace: e2e-tests-emptydir-4x86z, resource: bindings, ignored listing per whitelist
Jul 21 01:02:54.089: INFO: namespace e2e-tests-emptydir-4x86z deletion completed in 8.135118935s

• [SLOW TEST:14.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:02:54.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jul 21 01:02:54.174: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul 21 01:02:54.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:57.653: INFO: stderr: ""
Jul 21 01:02:57.653: INFO: stdout: "service/redis-slave created\n"
Jul 21 01:02:57.653: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul 21 01:02:57.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:57.937: INFO: stderr: ""
Jul 21 01:02:57.937: INFO: stdout: "service/redis-master created\n"
Jul 21 01:02:57.937: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 21 01:02:57.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:58.265: INFO: stderr: ""
Jul 21 01:02:58.266: INFO: stdout: "service/frontend created\n"
Jul 21 01:02:58.266: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul 21 01:02:58.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:58.549: INFO: stderr: ""
Jul 21 01:02:58.549: INFO: stdout: "deployment.extensions/frontend created\n"
Jul 21 01:02:58.549: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 21 01:02:58.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:58.867: INFO: stderr: ""
Jul 21 01:02:58.867: INFO: stdout: "deployment.extensions/redis-master created\n"
Jul 21 01:02:58.867: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul 21 01:02:58.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:02:59.153: INFO: stderr: ""
Jul 21 01:02:59.153: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul 21 01:02:59.153: INFO: Waiting for all frontend pods to be Running.
Jul 21 01:03:09.204: INFO: Waiting for frontend to serve content.
Jul 21 01:03:09.224: INFO: Trying to add a new entry to the guestbook.
Jul 21 01:03:09.238: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 21 01:03:09.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:09.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:09.415: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 21 01:03:09.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:09.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:09.560: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 21 01:03:09.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:09.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:09.734: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 21 01:03:09.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:09.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:09.878: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 21 01:03:09.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:10.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:10.443: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 21 01:03:10.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2n48f'
Jul 21 01:03:10.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:03:10.832: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:03:10.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2n48f" for this suite.
Jul 21 01:03:49.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:03:49.286: INFO: namespace: e2e-tests-kubectl-2n48f, resource: bindings, ignored listing per whitelist
Jul 21 01:03:49.324: INFO: namespace e2e-tests-kubectl-2n48f deletion completed in 38.436220793s

• [SLOW TEST:55.235 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:03:49.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul 21 01:03:49.433: INFO: Waiting up to 5m0s for pod "var-expansion-0938ec21-caee-11ea-86e4-0242ac110009" in namespace "e2e-tests-var-expansion-v5lkt" to be "success or failure"
Jul 21 01:03:49.438: INFO: Pod "var-expansion-0938ec21-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.827468ms
Jul 21 01:03:51.442: INFO: Pod "var-expansion-0938ec21-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009442291s
Jul 21 01:03:53.446: INFO: Pod "var-expansion-0938ec21-caee-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01305534s
STEP: Saw pod success
Jul 21 01:03:53.446: INFO: Pod "var-expansion-0938ec21-caee-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:03:53.448: INFO: Trying to get logs from node hunter-worker pod var-expansion-0938ec21-caee-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 01:03:53.675: INFO: Waiting for pod var-expansion-0938ec21-caee-11ea-86e4-0242ac110009 to disappear
Jul 21 01:03:53.690: INFO: Pod var-expansion-0938ec21-caee-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:03:53.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-v5lkt" for this suite.
Jul 21 01:03:59.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:03:59.749: INFO: namespace: e2e-tests-var-expansion-v5lkt, resource: bindings, ignored listing per whitelist
Jul 21 01:03:59.777: INFO: namespace e2e-tests-var-expansion-v5lkt deletion completed in 6.083641488s

• [SLOW TEST:10.452 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
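The Variable Expansion test above creates a pod whose container args reference an environment variable using the `$(VAR_NAME)` syntax, then checks the pod runs to completion. A minimal sketch of that kind of manifest (pod name, image, and variable are illustrative, not taken from the log):

```yaml
# Hypothetical manifest showing $(VAR) substitution in a container's command;
# names and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # The kubelet expands $(TEST_VAR) from the env section before starting the container.
    command: ["sh", "-c", "echo $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
```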
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:03:59.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:03:59.877: INFO: Creating ReplicaSet my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009
Jul 21 01:03:59.912: INFO: Pod name my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009: Found 0 pods out of 1
Jul 21 01:04:04.917: INFO: Pod name my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009: Found 1 pods out of 1
Jul 21 01:04:04.917: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009" is running
Jul 21 01:04:04.920: INFO: Pod "my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009-4n2b8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 01:03:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 01:04:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 01:04:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-21 01:03:59 +0000 UTC Reason: Message:}])
Jul 21 01:04:04.920: INFO: Trying to dial the pod
Jul 21 01:04:09.931: INFO: Controller my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009: Got expected result from replica 1 [my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009-4n2b8]: "my-hostname-basic-0f78045f-caee-11ea-86e4-0242ac110009-4n2b8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:04:09.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-wbrpj" for this suite.
Jul 21 01:04:15.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:04:15.974: INFO: namespace: e2e-tests-replicaset-wbrpj, resource: bindings, ignored listing per whitelist
Jul 21 01:04:16.026: INFO: namespace e2e-tests-replicaset-wbrpj deletion completed in 6.091527636s

• [SLOW TEST:16.249 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
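The ReplicaSet test above creates a one-replica set from a public image that serves its own pod hostname, waits for the pod to run, then dials each replica and checks the response matches the pod name. A sketch of such a ReplicaSet (names and image tag are illustrative):

```yaml
# Illustrative ReplicaSet; the image is a stand-in for a public
# "serve the pod hostname over HTTP" test image.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve-hostname:1.1   # responds with the pod's hostname
        ports:
        - containerPort: 9376
```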
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:04:16.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1931255c-caee-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1931255c-caee-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:04:24.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f8ch6" for this suite.
Jul 21 01:04:46.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:04:46.382: INFO: namespace: e2e-tests-projected-f8ch6, resource: bindings, ignored listing per whitelist
Jul 21 01:04:46.413: INFO: namespace e2e-tests-projected-f8ch6 deletion completed in 22.080872478s

• [SLOW TEST:30.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
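The Projected configMap test above mounts a ConfigMap through a `projected` volume, updates the ConfigMap object, and waits for the change to appear in the mounted file. A minimal sketch of the pod side of that setup (names are illustrative):

```yaml
# Hypothetical pod mounting a ConfigMap via a projected volume; updates to
# the ConfigMap object eventually propagate into the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test   # the object the test later updates
```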
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:04:46.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul 21 01:04:50.694: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:05:15.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-tvmkr" for this suite.
Jul 21 01:05:21.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:05:21.876: INFO: namespace: e2e-tests-namespaces-tvmkr, resource: bindings, ignored listing per whitelist
Jul 21 01:05:21.934: INFO: namespace e2e-tests-namespaces-tvmkr deletion completed in 6.10601466s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7fmvr" for this suite.
Jul 21 01:05:21.937: INFO: Namespace e2e-tests-nsdeletetest-7fmvr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-92kpd" for this suite.
Jul 21 01:05:28.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:05:28.054: INFO: namespace: e2e-tests-nsdeletetest-92kpd, resource: bindings, ignored listing per whitelist
Jul 21 01:05:28.076: INFO: namespace e2e-tests-nsdeletetest-92kpd deletion completed in 6.1396797s

• [SLOW TEST:41.663 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:05:28.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 21 01:05:32.774: INFO: Successfully updated pod "annotationupdate4419afca-caee-11ea-86e4-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:05:36.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l24xf" for this suite.
Jul 21 01:05:59.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:05:59.199: INFO: namespace: e2e-tests-downward-api-l24xf, resource: bindings, ignored listing per whitelist
Jul 21 01:05:59.220: INFO: namespace e2e-tests-downward-api-l24xf deletion completed in 22.382145955s

• [SLOW TEST:31.143 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
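The Downward API test above exposes the pod's annotations through a `downwardAPI` volume, modifies the annotations, and verifies the kubelet rewrites the mounted file. A sketch of the manifest shape involved (names and annotation values are illustrative):

```yaml
# Hypothetical pod exposing its own annotations via the downward API;
# the kubelet refreshes /etc/podinfo/annotations when they change.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```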
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:05:59.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 01:05:59.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:05:59.447: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 21 01:05:59.447: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul 21 01:05:59.527: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jul 21 01:05:59.536: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul 21 01:05:59.594: INFO: scanned /root for discovery docs: 
Jul 21 01:05:59.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:06:15.410: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 21 01:06:15.410: INFO: stdout: "Created e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8\nScaling up e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul 21 01:06:15.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:06:15.495: INFO: stderr: ""
Jul 21 01:06:15.495: INFO: stdout: "e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8-n5968 "
Jul 21 01:06:15.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8-n5968 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:06:15.581: INFO: stderr: ""
Jul 21 01:06:15.581: INFO: stdout: "true"
Jul 21 01:06:15.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8-n5968 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:06:15.692: INFO: stderr: ""
Jul 21 01:06:15.692: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul 21 01:06:15.692: INFO: e2e-test-nginx-rc-5db14396766eeeec3a0cb5da2ebe0ab8-n5968 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jul 21 01:06:15.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r87rd'
Jul 21 01:06:15.846: INFO: stderr: ""
Jul 21 01:06:15.846: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:06:15.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r87rd" for this suite.
Jul 21 01:06:38.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:06:38.296: INFO: namespace: e2e-tests-kubectl-r87rd, resource: bindings, ignored listing per whitelist
Jul 21 01:06:38.350: INFO: namespace e2e-tests-kubectl-r87rd deletion completed in 22.187424075s

• [SLOW TEST:39.129 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
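The rolling-update test above drives the long-deprecated `kubectl rolling-update` path against a ReplicationController created by the equally deprecated `kubectl run --generator=run/v1`. Roughly the RC that command would have produced (a sketch; only the name, image, and `run=` label are taken from the log):

```yaml
# Approximate ReplicationController behind the test; rolling-update creates a
# copy with a hashed name, scales it up while scaling this one down, then
# renames it back. Deployments with `kubectl rollout` replace this workflow.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```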
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:06:38.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:07:13.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-pqgqj" for this suite.
Jul 21 01:07:21.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:07:21.732: INFO: namespace: e2e-tests-container-runtime-pqgqj, resource: bindings, ignored listing per whitelist
Jul 21 01:07:21.795: INFO: namespace e2e-tests-container-runtime-pqgqj deletion completed in 8.108074441s

• [SLOW TEST:43.445 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
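The Container Runtime test above starts containers that exit deliberately and checks the resulting `RestartCount`, `Phase`, `Ready` condition, and `State` for each restart policy. A sketch of one such pod (names and exit code are illustrative; the test covers `Always`, `OnFailure`, and `Never` variants):

```yaml
# Hypothetical exiting container; with restartPolicy OnFailure the kubelet
# restarts it and increments status.containerStatuses[].restartCount.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits immediately with a failure code
```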
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:07:21.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 21 01:07:21.905: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86k7q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86k7q/configmaps/e2e-watch-test-watch-closed,UID:87de2b1a-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918589,Generation:0,CreationTimestamp:2020-07-21 01:07:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 21 01:07:21.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86k7q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86k7q/configmaps/e2e-watch-test-watch-closed,UID:87de2b1a-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918590,Generation:0,CreationTimestamp:2020-07-21 01:07:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 21 01:07:21.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86k7q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86k7q/configmaps/e2e-watch-test-watch-closed,UID:87de2b1a-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918591,Generation:0,CreationTimestamp:2020-07-21 01:07:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 21 01:07:21.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86k7q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86k7q/configmaps/e2e-watch-test-watch-closed,UID:87de2b1a-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918592,Generation:0,CreationTimestamp:2020-07-21 01:07:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:07:21.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-86k7q" for this suite.
Jul 21 01:07:28.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:07:28.029: INFO: namespace: e2e-tests-watch-86k7q, resource: bindings, ignored listing per whitelist
Jul 21 01:07:28.149: INFO: namespace e2e-tests-watch-86k7q deletion completed in 6.149330192s

• [SLOW TEST:6.354 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:07:28.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:07:32.356: INFO: Waiting up to 5m0s for pod "client-envvars-8e19e527-caee-11ea-86e4-0242ac110009" in namespace "e2e-tests-pods-gxqvs" to be "success or failure"
Jul 21 01:07:32.437: INFO: Pod "client-envvars-8e19e527-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 80.73893ms
Jul 21 01:07:34.440: INFO: Pod "client-envvars-8e19e527-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084181948s
Jul 21 01:07:36.444: INFO: Pod "client-envvars-8e19e527-caee-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08799138s
STEP: Saw pod success
Jul 21 01:07:36.444: INFO: Pod "client-envvars-8e19e527-caee-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:07:36.446: INFO: Trying to get logs from node hunter-worker pod client-envvars-8e19e527-caee-11ea-86e4-0242ac110009 container env3cont: 
STEP: delete the pod
Jul 21 01:07:36.546: INFO: Waiting for pod client-envvars-8e19e527-caee-11ea-86e4-0242ac110009 to disappear
Jul 21 01:07:36.634: INFO: Pod client-envvars-8e19e527-caee-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:07:36.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gxqvs" for this suite.
Jul 21 01:08:18.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:08:18.756: INFO: namespace: e2e-tests-pods-gxqvs, resource: bindings, ignored listing per whitelist
Jul 21 01:08:18.807: INFO: namespace e2e-tests-pods-gxqvs deletion completed in 42.169021217s

• [SLOW TEST:50.658 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
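The Pods test above checks that services existing when a pod starts are injected into its containers as environment variables. A minimal sketch of a pod that would surface them (names are illustrative; for a service `foo`, variables like `FOO_SERVICE_HOST` and `FOO_SERVICE_PORT` are expected):

```yaml
# Hypothetical client pod; its log output should list the env vars the
# kubelet injects for services that predate the pod.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["sh", "-c", "env | grep SERVICE"]
```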
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:08:18.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 21 01:08:18.959: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 21 01:08:18.967: INFO: Waiting for terminating namespaces to be deleted...
Jul 21 01:08:18.969: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 21 01:08:18.973: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 21 01:08:18.973: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 21 01:08:18.973: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 21 01:08:18.973: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 21 01:08:18.973: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 21 01:08:18.977: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 21 01:08:18.977: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 21 01:08:18.977: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 21 01:08:18.977: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jul 21 01:08:19.078: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker
Jul 21 01:08:19.078: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2
Jul 21 01:08:19.078: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker
Jul 21 01:08:19.078: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7003c-caee-11ea-86e4-0242ac110009.16239f2acb576009], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-t5ws7/filler-pod-a9f7003c-caee-11ea-86e4-0242ac110009 to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7003c-caee-11ea-86e4-0242ac110009.16239f2b1fa7d319], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7003c-caee-11ea-86e4-0242ac110009.16239f2b8171b660], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7003c-caee-11ea-86e4-0242ac110009.16239f2ba3fe8ce5], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7c63b-caee-11ea-86e4-0242ac110009.16239f2acc2a5050], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-t5ws7/filler-pod-a9f7c63b-caee-11ea-86e4-0242ac110009 to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7c63b-caee-11ea-86e4-0242ac110009.16239f2b6d2fffd4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7c63b-caee-11ea-86e4-0242ac110009.16239f2bd23a76ff], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a9f7c63b-caee-11ea-86e4-0242ac110009.16239f2bea14398c], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.16239f2c322c8c6d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:08:26.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-t5ws7" for this suite.
Jul 21 01:08:34.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:08:34.555: INFO: namespace: e2e-tests-sched-pred-t5ws7, resource: bindings, ignored listing per whitelist
Jul 21 01:08:34.570: INFO: namespace e2e-tests-sched-pred-t5ws7 deletion completed in 8.101892667s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:15.763 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
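Editor's note: the SchedulerPredicates test above fills each node with pods sized to consume most of its allocatable CPU, then confirms that one more pod fails with "Insufficient cpu". The fit check it exercises reduces to arithmetic on millicore requests; this is a simplified sketch with hypothetical numbers, not the scheduler's actual predicate code:

```go
package main

import "fmt"

// fitsNode reports whether a pod's CPU request (in millicores) fits on a
// node: allocatable CPU minus the sum of existing pod requests must cover
// the new request, otherwise scheduling fails with "Insufficient cpu".
func fitsNode(allocatableMilli int64, requestedMilli []int64, podMilli int64) bool {
	var used int64
	for _, r := range requestedMilli {
		used += r
	}
	return allocatableMilli-used >= podMilli
}

func main() {
	// Hypothetical 2-core node already running kindnet (100m) plus a
	// filler pod sized to leave only 100m free, as in the test above.
	fmt.Println(fitsNode(2000, []int64{100, 1800}, 500)) // false: only 100m left
}
```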
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:08:34.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 21 01:08:36.295: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c6nw7,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6nw7/configmaps/e2e-watch-test-resource-version,UID:b3c1ac3d-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918844,Generation:0,CreationTimestamp:2020-07-21 01:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 21 01:08:36.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c6nw7,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6nw7/configmaps/e2e-watch-test-resource-version,UID:b3c1ac3d-caee-11ea-b2c9-0242ac120008,ResourceVersion:1918846,Generation:0,CreationTimestamp:2020-07-21 01:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:08:36.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-c6nw7" for this suite.
Jul 21 01:08:42.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:08:42.442: INFO: namespace: e2e-tests-watch-c6nw7, resource: bindings, ignored listing per whitelist
Jul 21 01:08:42.491: INFO: namespace e2e-tests-watch-c6nw7 deletion completed in 6.136149218s

• [SLOW TEST:7.920 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
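Editor's note: the Watchers test above starts a watch at the resource version returned by the first update and expects to observe only the later MODIFIED and DELETED events, which is exactly what the two "Got :" lines show. A self-contained simulation of that replay semantic (the types and numeric versions here are illustrative, not the real client-go API):

```go
package main

import "fmt"

// event models a watch event on a ConfigMap-like object.
type event struct {
	typ             string // ADDED, MODIFIED, DELETED
	resourceVersion int64
}

// replayFrom returns the events a watch started at resourceVersion rv
// would deliver: everything strictly after rv, in order.
func replayFrom(history []event, rv int64) []event {
	var out []event
	for _, e := range history {
		if e.resourceVersion > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	history := []event{
		{"ADDED", 1},
		{"MODIFIED", 2}, // first update: the watch starts here
		{"MODIFIED", 3}, // second update
		{"DELETED", 4},
	}
	// Only the second MODIFIED and the DELETED are observed.
	for _, e := range replayFrom(history, 2) {
		fmt.Println(e.typ, e.resourceVersion)
	}
}
```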
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:08:42.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 21 01:08:52.772: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:52.772: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:52.808591       6 log.go:172] (0xc00168a370) (0xc0024139a0) Create stream
I0721 01:08:52.808637       6 log.go:172] (0xc00168a370) (0xc0024139a0) Stream added, broadcasting: 1
I0721 01:08:52.811631       6 log.go:172] (0xc00168a370) Reply frame received for 1
I0721 01:08:52.811694       6 log.go:172] (0xc00168a370) (0xc0019020a0) Create stream
I0721 01:08:52.811724       6 log.go:172] (0xc00168a370) (0xc0019020a0) Stream added, broadcasting: 3
I0721 01:08:52.813083       6 log.go:172] (0xc00168a370) Reply frame received for 3
I0721 01:08:52.813141       6 log.go:172] (0xc00168a370) (0xc002413a40) Create stream
I0721 01:08:52.813171       6 log.go:172] (0xc00168a370) (0xc002413a40) Stream added, broadcasting: 5
I0721 01:08:52.814312       6 log.go:172] (0xc00168a370) Reply frame received for 5
I0721 01:08:52.979432       6 log.go:172] (0xc00168a370) Data frame received for 5
I0721 01:08:52.979482       6 log.go:172] (0xc002413a40) (5) Data frame handling
I0721 01:08:52.979521       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 01:08:52.979587       6 log.go:172] (0xc0019020a0) (3) Data frame handling
I0721 01:08:52.979617       6 log.go:172] (0xc0019020a0) (3) Data frame sent
I0721 01:08:52.979642       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 01:08:52.979683       6 log.go:172] (0xc0019020a0) (3) Data frame handling
I0721 01:08:52.981876       6 log.go:172] (0xc00168a370) Data frame received for 1
I0721 01:08:52.981901       6 log.go:172] (0xc0024139a0) (1) Data frame handling
I0721 01:08:52.981917       6 log.go:172] (0xc0024139a0) (1) Data frame sent
I0721 01:08:52.981930       6 log.go:172] (0xc00168a370) (0xc0024139a0) Stream removed, broadcasting: 1
I0721 01:08:52.982035       6 log.go:172] (0xc00168a370) Go away received
I0721 01:08:52.982295       6 log.go:172] (0xc00168a370) (0xc0024139a0) Stream removed, broadcasting: 1
I0721 01:08:52.982342       6 log.go:172] (0xc00168a370) (0xc0019020a0) Stream removed, broadcasting: 3
I0721 01:08:52.982361       6 log.go:172] (0xc00168a370) (0xc002413a40) Stream removed, broadcasting: 5
Jul 21 01:08:52.982: INFO: Exec stderr: ""
Jul 21 01:08:52.982: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:52.982: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.097397       6 log.go:172] (0xc0000ebd90) (0xc0019023c0) Create stream
I0721 01:08:53.097434       6 log.go:172] (0xc0000ebd90) (0xc0019023c0) Stream added, broadcasting: 1
I0721 01:08:53.099905       6 log.go:172] (0xc0000ebd90) Reply frame received for 1
I0721 01:08:53.099966       6 log.go:172] (0xc0000ebd90) (0xc0019025a0) Create stream
I0721 01:08:53.099995       6 log.go:172] (0xc0000ebd90) (0xc0019025a0) Stream added, broadcasting: 3
I0721 01:08:53.101031       6 log.go:172] (0xc0000ebd90) Reply frame received for 3
I0721 01:08:53.101061       6 log.go:172] (0xc0000ebd90) (0xc0022fa3c0) Create stream
I0721 01:08:53.101077       6 log.go:172] (0xc0000ebd90) (0xc0022fa3c0) Stream added, broadcasting: 5
I0721 01:08:53.101891       6 log.go:172] (0xc0000ebd90) Reply frame received for 5
I0721 01:08:53.166714       6 log.go:172] (0xc0000ebd90) Data frame received for 3
I0721 01:08:53.166755       6 log.go:172] (0xc0019025a0) (3) Data frame handling
I0721 01:08:53.166769       6 log.go:172] (0xc0019025a0) (3) Data frame sent
I0721 01:08:53.166779       6 log.go:172] (0xc0000ebd90) Data frame received for 3
I0721 01:08:53.166791       6 log.go:172] (0xc0019025a0) (3) Data frame handling
I0721 01:08:53.166811       6 log.go:172] (0xc0000ebd90) Data frame received for 5
I0721 01:08:53.166819       6 log.go:172] (0xc0022fa3c0) (5) Data frame handling
I0721 01:08:53.168327       6 log.go:172] (0xc0000ebd90) Data frame received for 1
I0721 01:08:53.168347       6 log.go:172] (0xc0019023c0) (1) Data frame handling
I0721 01:08:53.168373       6 log.go:172] (0xc0019023c0) (1) Data frame sent
I0721 01:08:53.168440       6 log.go:172] (0xc0000ebd90) (0xc0019023c0) Stream removed, broadcasting: 1
I0721 01:08:53.168489       6 log.go:172] (0xc0000ebd90) Go away received
I0721 01:08:53.168612       6 log.go:172] (0xc0000ebd90) (0xc0019023c0) Stream removed, broadcasting: 1
I0721 01:08:53.168640       6 log.go:172] (0xc0000ebd90) (0xc0019025a0) Stream removed, broadcasting: 3
I0721 01:08:53.168656       6 log.go:172] (0xc0000ebd90) (0xc0022fa3c0) Stream removed, broadcasting: 5
Jul 21 01:08:53.168: INFO: Exec stderr: ""
Jul 21 01:08:53.168: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.168: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.199759       6 log.go:172] (0xc001fd4370) (0xc001902a00) Create stream
I0721 01:08:53.199788       6 log.go:172] (0xc001fd4370) (0xc001902a00) Stream added, broadcasting: 1
I0721 01:08:53.202292       6 log.go:172] (0xc001fd4370) Reply frame received for 1
I0721 01:08:53.202319       6 log.go:172] (0xc001fd4370) (0xc0022fa460) Create stream
I0721 01:08:53.202327       6 log.go:172] (0xc001fd4370) (0xc0022fa460) Stream added, broadcasting: 3
I0721 01:08:53.203198       6 log.go:172] (0xc001fd4370) Reply frame received for 3
I0721 01:08:53.203249       6 log.go:172] (0xc001fd4370) (0xc002413ae0) Create stream
I0721 01:08:53.203265       6 log.go:172] (0xc001fd4370) (0xc002413ae0) Stream added, broadcasting: 5
I0721 01:08:53.204227       6 log.go:172] (0xc001fd4370) Reply frame received for 5
I0721 01:08:53.264938       6 log.go:172] (0xc001fd4370) Data frame received for 3
I0721 01:08:53.264971       6 log.go:172] (0xc0022fa460) (3) Data frame handling
I0721 01:08:53.265000       6 log.go:172] (0xc0022fa460) (3) Data frame sent
I0721 01:08:53.265021       6 log.go:172] (0xc001fd4370) Data frame received for 3
I0721 01:08:53.265036       6 log.go:172] (0xc0022fa460) (3) Data frame handling
I0721 01:08:53.265065       6 log.go:172] (0xc001fd4370) Data frame received for 5
I0721 01:08:53.265134       6 log.go:172] (0xc002413ae0) (5) Data frame handling
I0721 01:08:53.266288       6 log.go:172] (0xc001fd4370) Data frame received for 1
I0721 01:08:53.266312       6 log.go:172] (0xc001902a00) (1) Data frame handling
I0721 01:08:53.266340       6 log.go:172] (0xc001902a00) (1) Data frame sent
I0721 01:08:53.266466       6 log.go:172] (0xc001fd4370) (0xc001902a00) Stream removed, broadcasting: 1
I0721 01:08:53.266550       6 log.go:172] (0xc001fd4370) (0xc001902a00) Stream removed, broadcasting: 1
I0721 01:08:53.266565       6 log.go:172] (0xc001fd4370) (0xc0022fa460) Stream removed, broadcasting: 3
I0721 01:08:53.266577       6 log.go:172] (0xc001fd4370) (0xc002413ae0) Stream removed, broadcasting: 5
Jul 21 01:08:53.266: INFO: Exec stderr: ""
Jul 21 01:08:53.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.266: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.268361       6 log.go:172] (0xc001fd4370) Go away received
I0721 01:08:53.292823       6 log.go:172] (0xc00168a6e0) (0xc002413c20) Create stream
I0721 01:08:53.292860       6 log.go:172] (0xc00168a6e0) (0xc002413c20) Stream added, broadcasting: 1
I0721 01:08:53.296966       6 log.go:172] (0xc00168a6e0) Reply frame received for 1
I0721 01:08:53.297008       6 log.go:172] (0xc00168a6e0) (0xc0022fa640) Create stream
I0721 01:08:53.297029       6 log.go:172] (0xc00168a6e0) (0xc0022fa640) Stream added, broadcasting: 3
I0721 01:08:53.299173       6 log.go:172] (0xc00168a6e0) Reply frame received for 3
I0721 01:08:53.299205       6 log.go:172] (0xc00168a6e0) (0xc0018380a0) Create stream
I0721 01:08:53.299217       6 log.go:172] (0xc00168a6e0) (0xc0018380a0) Stream added, broadcasting: 5
I0721 01:08:53.300370       6 log.go:172] (0xc00168a6e0) Reply frame received for 5
I0721 01:08:53.372976       6 log.go:172] (0xc00168a6e0) Data frame received for 5
I0721 01:08:53.373008       6 log.go:172] (0xc0018380a0) (5) Data frame handling
I0721 01:08:53.373065       6 log.go:172] (0xc00168a6e0) Data frame received for 3
I0721 01:08:53.373099       6 log.go:172] (0xc0022fa640) (3) Data frame handling
I0721 01:08:53.373124       6 log.go:172] (0xc0022fa640) (3) Data frame sent
I0721 01:08:53.373140       6 log.go:172] (0xc00168a6e0) Data frame received for 3
I0721 01:08:53.373166       6 log.go:172] (0xc0022fa640) (3) Data frame handling
I0721 01:08:53.374509       6 log.go:172] (0xc00168a6e0) Data frame received for 1
I0721 01:08:53.374563       6 log.go:172] (0xc002413c20) (1) Data frame handling
I0721 01:08:53.374603       6 log.go:172] (0xc002413c20) (1) Data frame sent
I0721 01:08:53.374627       6 log.go:172] (0xc00168a6e0) (0xc002413c20) Stream removed, broadcasting: 1
I0721 01:08:53.374656       6 log.go:172] (0xc00168a6e0) Go away received
I0721 01:08:53.374736       6 log.go:172] (0xc00168a6e0) (0xc002413c20) Stream removed, broadcasting: 1
I0721 01:08:53.374755       6 log.go:172] (0xc00168a6e0) (0xc0022fa640) Stream removed, broadcasting: 3
I0721 01:08:53.374766       6 log.go:172] (0xc00168a6e0) (0xc0018380a0) Stream removed, broadcasting: 5
Jul 21 01:08:53.374: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 21 01:08:53.374: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.374: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.403570       6 log.go:172] (0xc001fd4840) (0xc001902dc0) Create stream
I0721 01:08:53.403600       6 log.go:172] (0xc001fd4840) (0xc001902dc0) Stream added, broadcasting: 1
I0721 01:08:53.406443       6 log.go:172] (0xc001fd4840) Reply frame received for 1
I0721 01:08:53.406501       6 log.go:172] (0xc001fd4840) (0xc00158f400) Create stream
I0721 01:08:53.406517       6 log.go:172] (0xc001fd4840) (0xc00158f400) Stream added, broadcasting: 3
I0721 01:08:53.407519       6 log.go:172] (0xc001fd4840) Reply frame received for 3
I0721 01:08:53.407556       6 log.go:172] (0xc001fd4840) (0xc00158f4a0) Create stream
I0721 01:08:53.407569       6 log.go:172] (0xc001fd4840) (0xc00158f4a0) Stream added, broadcasting: 5
I0721 01:08:53.408653       6 log.go:172] (0xc001fd4840) Reply frame received for 5
I0721 01:08:53.464332       6 log.go:172] (0xc001fd4840) Data frame received for 5
I0721 01:08:53.464363       6 log.go:172] (0xc00158f4a0) (5) Data frame handling
I0721 01:08:53.464391       6 log.go:172] (0xc001fd4840) Data frame received for 3
I0721 01:08:53.464409       6 log.go:172] (0xc00158f400) (3) Data frame handling
I0721 01:08:53.464456       6 log.go:172] (0xc00158f400) (3) Data frame sent
I0721 01:08:53.464492       6 log.go:172] (0xc001fd4840) Data frame received for 3
I0721 01:08:53.464507       6 log.go:172] (0xc00158f400) (3) Data frame handling
I0721 01:08:53.465914       6 log.go:172] (0xc001fd4840) Data frame received for 1
I0721 01:08:53.465934       6 log.go:172] (0xc001902dc0) (1) Data frame handling
I0721 01:08:53.465944       6 log.go:172] (0xc001902dc0) (1) Data frame sent
I0721 01:08:53.465953       6 log.go:172] (0xc001fd4840) (0xc001902dc0) Stream removed, broadcasting: 1
I0721 01:08:53.465966       6 log.go:172] (0xc001fd4840) Go away received
I0721 01:08:53.466098       6 log.go:172] (0xc001fd4840) (0xc001902dc0) Stream removed, broadcasting: 1
I0721 01:08:53.466126       6 log.go:172] (0xc001fd4840) (0xc00158f400) Stream removed, broadcasting: 3
I0721 01:08:53.466142       6 log.go:172] (0xc001fd4840) (0xc00158f4a0) Stream removed, broadcasting: 5
Jul 21 01:08:53.466: INFO: Exec stderr: ""
Jul 21 01:08:53.466: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.466: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.490782       6 log.go:172] (0xc001fd4b00) (0xc001902f00) Create stream
I0721 01:08:53.490822       6 log.go:172] (0xc001fd4b00) (0xc001902f00) Stream added, broadcasting: 1
I0721 01:08:53.492385       6 log.go:172] (0xc001fd4b00) Reply frame received for 1
I0721 01:08:53.492433       6 log.go:172] (0xc001fd4b00) (0xc001903040) Create stream
I0721 01:08:53.492442       6 log.go:172] (0xc001fd4b00) (0xc001903040) Stream added, broadcasting: 3
I0721 01:08:53.493310       6 log.go:172] (0xc001fd4b00) Reply frame received for 3
I0721 01:08:53.493350       6 log.go:172] (0xc001fd4b00) (0xc00158f540) Create stream
I0721 01:08:53.493361       6 log.go:172] (0xc001fd4b00) (0xc00158f540) Stream added, broadcasting: 5
I0721 01:08:53.494178       6 log.go:172] (0xc001fd4b00) Reply frame received for 5
I0721 01:08:53.549652       6 log.go:172] (0xc001fd4b00) Data frame received for 3
I0721 01:08:53.549683       6 log.go:172] (0xc001903040) (3) Data frame handling
I0721 01:08:53.549713       6 log.go:172] (0xc001903040) (3) Data frame sent
I0721 01:08:53.549730       6 log.go:172] (0xc001fd4b00) Data frame received for 3
I0721 01:08:53.549736       6 log.go:172] (0xc001903040) (3) Data frame handling
I0721 01:08:53.550036       6 log.go:172] (0xc001fd4b00) Data frame received for 5
I0721 01:08:53.550066       6 log.go:172] (0xc00158f540) (5) Data frame handling
I0721 01:08:53.551267       6 log.go:172] (0xc001fd4b00) Data frame received for 1
I0721 01:08:53.551297       6 log.go:172] (0xc001902f00) (1) Data frame handling
I0721 01:08:53.551324       6 log.go:172] (0xc001902f00) (1) Data frame sent
I0721 01:08:53.551348       6 log.go:172] (0xc001fd4b00) (0xc001902f00) Stream removed, broadcasting: 1
I0721 01:08:53.551417       6 log.go:172] (0xc001fd4b00) Go away received
I0721 01:08:53.551451       6 log.go:172] (0xc001fd4b00) (0xc001902f00) Stream removed, broadcasting: 1
I0721 01:08:53.551495       6 log.go:172] (0xc001fd4b00) (0xc001903040) Stream removed, broadcasting: 3
I0721 01:08:53.551516       6 log.go:172] (0xc001fd4b00) (0xc00158f540) Stream removed, broadcasting: 5
Jul 21 01:08:53.551: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 21 01:08:53.551: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.551: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.577164       6 log.go:172] (0xc001fd4fd0) (0xc0019032c0) Create stream
I0721 01:08:53.577193       6 log.go:172] (0xc001fd4fd0) (0xc0019032c0) Stream added, broadcasting: 1
I0721 01:08:53.579614       6 log.go:172] (0xc001fd4fd0) Reply frame received for 1
I0721 01:08:53.579649       6 log.go:172] (0xc001fd4fd0) (0xc001903360) Create stream
I0721 01:08:53.579664       6 log.go:172] (0xc001fd4fd0) (0xc001903360) Stream added, broadcasting: 3
I0721 01:08:53.580589       6 log.go:172] (0xc001fd4fd0) Reply frame received for 3
I0721 01:08:53.580617       6 log.go:172] (0xc001fd4fd0) (0xc0019034a0) Create stream
I0721 01:08:53.580628       6 log.go:172] (0xc001fd4fd0) (0xc0019034a0) Stream added, broadcasting: 5
I0721 01:08:53.581468       6 log.go:172] (0xc001fd4fd0) Reply frame received for 5
I0721 01:08:53.634849       6 log.go:172] (0xc001fd4fd0) Data frame received for 5
I0721 01:08:53.634898       6 log.go:172] (0xc0019034a0) (5) Data frame handling
I0721 01:08:53.634933       6 log.go:172] (0xc001fd4fd0) Data frame received for 3
I0721 01:08:53.634950       6 log.go:172] (0xc001903360) (3) Data frame handling
I0721 01:08:53.634967       6 log.go:172] (0xc001903360) (3) Data frame sent
I0721 01:08:53.634982       6 log.go:172] (0xc001fd4fd0) Data frame received for 3
I0721 01:08:53.634996       6 log.go:172] (0xc001903360) (3) Data frame handling
I0721 01:08:53.636694       6 log.go:172] (0xc001fd4fd0) Data frame received for 1
I0721 01:08:53.636719       6 log.go:172] (0xc0019032c0) (1) Data frame handling
I0721 01:08:53.636850       6 log.go:172] (0xc0019032c0) (1) Data frame sent
I0721 01:08:53.636879       6 log.go:172] (0xc001fd4fd0) (0xc0019032c0) Stream removed, broadcasting: 1
I0721 01:08:53.636908       6 log.go:172] (0xc001fd4fd0) Go away received
I0721 01:08:53.637070       6 log.go:172] (0xc001fd4fd0) (0xc0019032c0) Stream removed, broadcasting: 1
I0721 01:08:53.637099       6 log.go:172] (0xc001fd4fd0) (0xc001903360) Stream removed, broadcasting: 3
I0721 01:08:53.637108       6 log.go:172] (0xc001fd4fd0) (0xc0019034a0) Stream removed, broadcasting: 5
Jul 21 01:08:53.637: INFO: Exec stderr: ""
Jul 21 01:08:53.637: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.637: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.665567       6 log.go:172] (0xc00168adc0) (0xc0021c4140) Create stream
I0721 01:08:53.665592       6 log.go:172] (0xc00168adc0) (0xc0021c4140) Stream added, broadcasting: 1
I0721 01:08:53.670441       6 log.go:172] (0xc00168adc0) Reply frame received for 1
I0721 01:08:53.670500       6 log.go:172] (0xc00168adc0) (0xc00158f5e0) Create stream
I0721 01:08:53.670518       6 log.go:172] (0xc00168adc0) (0xc00158f5e0) Stream added, broadcasting: 3
I0721 01:08:53.671993       6 log.go:172] (0xc00168adc0) Reply frame received for 3
I0721 01:08:53.672045       6 log.go:172] (0xc00168adc0) (0xc001838280) Create stream
I0721 01:08:53.672059       6 log.go:172] (0xc00168adc0) (0xc001838280) Stream added, broadcasting: 5
I0721 01:08:53.673948       6 log.go:172] (0xc00168adc0) Reply frame received for 5
I0721 01:08:53.736554       6 log.go:172] (0xc00168adc0) Data frame received for 5
I0721 01:08:53.736583       6 log.go:172] (0xc001838280) (5) Data frame handling
I0721 01:08:53.736605       6 log.go:172] (0xc00168adc0) Data frame received for 3
I0721 01:08:53.736614       6 log.go:172] (0xc00158f5e0) (3) Data frame handling
I0721 01:08:53.736624       6 log.go:172] (0xc00158f5e0) (3) Data frame sent
I0721 01:08:53.736631       6 log.go:172] (0xc00168adc0) Data frame received for 3
I0721 01:08:53.736647       6 log.go:172] (0xc00158f5e0) (3) Data frame handling
I0721 01:08:53.738383       6 log.go:172] (0xc00168adc0) Data frame received for 1
I0721 01:08:53.738403       6 log.go:172] (0xc0021c4140) (1) Data frame handling
I0721 01:08:53.738427       6 log.go:172] (0xc0021c4140) (1) Data frame sent
I0721 01:08:53.738511       6 log.go:172] (0xc00168adc0) (0xc0021c4140) Stream removed, broadcasting: 1
I0721 01:08:53.738589       6 log.go:172] (0xc00168adc0) Go away received
I0721 01:08:53.738634       6 log.go:172] (0xc00168adc0) (0xc0021c4140) Stream removed, broadcasting: 1
I0721 01:08:53.738668       6 log.go:172] (0xc00168adc0) (0xc00158f5e0) Stream removed, broadcasting: 3
I0721 01:08:53.738682       6 log.go:172] (0xc00168adc0) (0xc001838280) Stream removed, broadcasting: 5
Jul 21 01:08:53.738: INFO: Exec stderr: ""
Jul 21 01:08:53.738: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.738: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:53.770206       6 log.go:172] (0xc000c1c420) (0xc0022fa960) Create stream
I0721 01:08:53.770250       6 log.go:172] (0xc000c1c420) (0xc0022fa960) Stream added, broadcasting: 1
I0721 01:08:53.771956       6 log.go:172] (0xc000c1c420) Reply frame received for 1
I0721 01:08:53.771985       6 log.go:172] (0xc000c1c420) (0xc0022faa00) Create stream
I0721 01:08:53.771995       6 log.go:172] (0xc000c1c420) (0xc0022faa00) Stream added, broadcasting: 3
I0721 01:08:53.773149       6 log.go:172] (0xc000c1c420) Reply frame received for 3
I0721 01:08:53.773168       6 log.go:172] (0xc000c1c420) (0xc001903540) Create stream
I0721 01:08:53.773178       6 log.go:172] (0xc000c1c420) (0xc001903540) Stream added, broadcasting: 5
I0721 01:08:53.774151       6 log.go:172] (0xc000c1c420) Reply frame received for 5
I0721 01:08:53.842116       6 log.go:172] (0xc000c1c420) Data frame received for 5
I0721 01:08:53.842144       6 log.go:172] (0xc001903540) (5) Data frame handling
I0721 01:08:53.842166       6 log.go:172] (0xc000c1c420) Data frame received for 3
I0721 01:08:53.842183       6 log.go:172] (0xc0022faa00) (3) Data frame handling
I0721 01:08:53.842196       6 log.go:172] (0xc0022faa00) (3) Data frame sent
I0721 01:08:53.842206       6 log.go:172] (0xc000c1c420) Data frame received for 3
I0721 01:08:53.842214       6 log.go:172] (0xc0022faa00) (3) Data frame handling
I0721 01:08:53.843695       6 log.go:172] (0xc000c1c420) Data frame received for 1
I0721 01:08:53.843726       6 log.go:172] (0xc0022fa960) (1) Data frame handling
I0721 01:08:53.843749       6 log.go:172] (0xc0022fa960) (1) Data frame sent
I0721 01:08:53.843772       6 log.go:172] (0xc000c1c420) (0xc0022fa960) Stream removed, broadcasting: 1
I0721 01:08:53.843795       6 log.go:172] (0xc000c1c420) Go away received
I0721 01:08:53.843897       6 log.go:172] (0xc000c1c420) (0xc0022fa960) Stream removed, broadcasting: 1
I0721 01:08:53.843924       6 log.go:172] (0xc000c1c420) (0xc0022faa00) Stream removed, broadcasting: 3
I0721 01:08:53.843950       6 log.go:172] (0xc000c1c420) (0xc001903540) Stream removed, broadcasting: 5
Jul 21 01:08:53.843: INFO: Exec stderr: ""
Jul 21 01:08:53.844: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zq4j9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:08:53.844: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:08:54.016650       6 log.go:172] (0xc000675ad0) (0xc00158f860) Create stream
I0721 01:08:54.016697       6 log.go:172] (0xc000675ad0) (0xc00158f860) Stream added, broadcasting: 1
I0721 01:08:54.019952       6 log.go:172] (0xc000675ad0) Reply frame received for 1
I0721 01:08:54.020002       6 log.go:172] (0xc000675ad0) (0xc0018383c0) Create stream
I0721 01:08:54.020016       6 log.go:172] (0xc000675ad0) (0xc0018383c0) Stream added, broadcasting: 3
I0721 01:08:54.021009       6 log.go:172] (0xc000675ad0) Reply frame received for 3
I0721 01:08:54.021043       6 log.go:172] (0xc000675ad0) (0xc0019035e0) Create stream
I0721 01:08:54.021054       6 log.go:172] (0xc000675ad0) (0xc0019035e0) Stream added, broadcasting: 5
I0721 01:08:54.021890       6 log.go:172] (0xc000675ad0) Reply frame received for 5
I0721 01:08:54.084043       6 log.go:172] (0xc000675ad0) Data frame received for 3
I0721 01:08:54.084090       6 log.go:172] (0xc0018383c0) (3) Data frame handling
I0721 01:08:54.084113       6 log.go:172] (0xc0018383c0) (3) Data frame sent
I0721 01:08:54.084129       6 log.go:172] (0xc000675ad0) Data frame received for 3
I0721 01:08:54.084142       6 log.go:172] (0xc0018383c0) (3) Data frame handling
I0721 01:08:54.084395       6 log.go:172] (0xc000675ad0) Data frame received for 5
I0721 01:08:54.084431       6 log.go:172] (0xc0019035e0) (5) Data frame handling
I0721 01:08:54.086510       6 log.go:172] (0xc000675ad0) Data frame received for 1
I0721 01:08:54.086538       6 log.go:172] (0xc00158f860) (1) Data frame handling
I0721 01:08:54.086552       6 log.go:172] (0xc00158f860) (1) Data frame sent
I0721 01:08:54.086568       6 log.go:172] (0xc000675ad0) (0xc00158f860) Stream removed, broadcasting: 1
I0721 01:08:54.086591       6 log.go:172] (0xc000675ad0) Go away received
I0721 01:08:54.086680       6 log.go:172] (0xc000675ad0) (0xc00158f860) Stream removed, broadcasting: 1
I0721 01:08:54.086699       6 log.go:172] (0xc000675ad0) (0xc0018383c0) Stream removed, broadcasting: 3
I0721 01:08:54.086706       6 log.go:172] (0xc000675ad0) (0xc0019035e0) Stream removed, broadcasting: 5
Jul 21 01:08:54.086: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:08:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-zq4j9" for this suite.
Jul 21 01:09:40.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:09:40.232: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-zq4j9, resource: bindings, ignored listing per whitelist
Jul 21 01:09:40.295: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-zq4j9 deletion completed in 46.204802604s

• [SLOW TEST:57.804 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:09:40.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 21 01:09:40.679: INFO: Waiting up to 5m0s for pod "pod-da998232-caee-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-ggqzw" to be "success or failure"
Jul 21 01:09:40.702: INFO: Pod "pod-da998232-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.82798ms
Jul 21 01:09:42.782: INFO: Pod "pod-da998232-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102698704s
Jul 21 01:09:44.918: INFO: Pod "pod-da998232-caee-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238960677s
Jul 21 01:09:46.922: INFO: Pod "pod-da998232-caee-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.242792025s
STEP: Saw pod success
Jul 21 01:09:46.922: INFO: Pod "pod-da998232-caee-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:09:46.933: INFO: Trying to get logs from node hunter-worker2 pod pod-da998232-caee-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:09:47.137: INFO: Waiting for pod pod-da998232-caee-11ea-86e4-0242ac110009 to disappear
Jul 21 01:09:47.151: INFO: Pod pod-da998232-caee-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:09:47.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ggqzw" for this suite.
Jul 21 01:09:53.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:09:53.268: INFO: namespace: e2e-tests-emptydir-ggqzw, resource: bindings, ignored listing per whitelist
Jul 21 01:09:53.321: INFO: namespace e2e-tests-emptydir-ggqzw deletion completed in 6.166428143s

• [SLOW TEST:13.026 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:09:53.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ldwqb
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 21 01:09:53.459: INFO: Found 0 stateful pods, waiting for 3
Jul 21 01:10:03.614: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:10:03.614: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:10:03.614: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 21 01:10:13.463: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:10:13.463: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:10:13.463: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:10:13.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ldwqb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:10:13.697: INFO: stderr: "I0721 01:10:13.586568    1735 log.go:172] (0xc000138630) (0xc0008b8640) Create stream\nI0721 01:10:13.586631    1735 log.go:172] (0xc000138630) (0xc0008b8640) Stream added, broadcasting: 1\nI0721 01:10:13.590822    1735 log.go:172] (0xc000138630) Reply frame received for 1\nI0721 01:10:13.590864    1735 log.go:172] (0xc000138630) (0xc000891d60) Create stream\nI0721 01:10:13.590877    1735 log.go:172] (0xc000138630) (0xc000891d60) Stream added, broadcasting: 3\nI0721 01:10:13.591885    1735 log.go:172] (0xc000138630) Reply frame received for 3\nI0721 01:10:13.591938    1735 log.go:172] (0xc000138630) (0xc0007f1540) Create stream\nI0721 01:10:13.591957    1735 log.go:172] (0xc000138630) (0xc0007f1540) Stream added, broadcasting: 5\nI0721 01:10:13.592933    1735 log.go:172] (0xc000138630) Reply frame received for 5\nI0721 01:10:13.689603    1735 log.go:172] (0xc000138630) Data frame received for 3\nI0721 01:10:13.689627    1735 log.go:172] (0xc000891d60) (3) Data frame handling\nI0721 01:10:13.689640    1735 log.go:172] (0xc000891d60) (3) Data frame sent\nI0721 01:10:13.689648    1735 log.go:172] (0xc000138630) Data frame received for 3\nI0721 01:10:13.689654    1735 log.go:172] (0xc000891d60) (3) Data frame handling\nI0721 01:10:13.689843    1735 log.go:172] (0xc000138630) Data frame received for 5\nI0721 01:10:13.689868    1735 log.go:172] (0xc0007f1540) (5) Data frame handling\nI0721 01:10:13.692237    1735 log.go:172] (0xc000138630) Data frame received for 1\nI0721 01:10:13.692258    1735 log.go:172] (0xc0008b8640) (1) Data frame handling\nI0721 01:10:13.692273    1735 log.go:172] (0xc0008b8640) (1) Data frame sent\nI0721 01:10:13.692281    1735 log.go:172] (0xc000138630) (0xc0008b8640) Stream removed, broadcasting: 1\nI0721 01:10:13.692290    1735 log.go:172] (0xc000138630) Go away received\nI0721 01:10:13.692570    1735 log.go:172] (0xc000138630) (0xc0008b8640) Stream removed, broadcasting: 1\nI0721 01:10:13.692598    1735 log.go:172] (0xc000138630) (0xc000891d60) Stream removed, broadcasting: 3\nI0721 01:10:13.692612    1735 log.go:172] (0xc000138630) (0xc0007f1540) Stream removed, broadcasting: 5\n"
Jul 21 01:10:13.697: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:10:13.697: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 21 01:10:23.730: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 21 01:10:33.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ldwqb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 21 01:10:33.947: INFO: stderr: "I0721 01:10:33.873381    1758 log.go:172] (0xc0008682c0) (0xc00074a640) Create stream\nI0721 01:10:33.873449    1758 log.go:172] (0xc0008682c0) (0xc00074a640) Stream added, broadcasting: 1\nI0721 01:10:33.875937    1758 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0721 01:10:33.875993    1758 log.go:172] (0xc0008682c0) (0xc0007fcdc0) Create stream\nI0721 01:10:33.876011    1758 log.go:172] (0xc0008682c0) (0xc0007fcdc0) Stream added, broadcasting: 3\nI0721 01:10:33.877118    1758 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0721 01:10:33.877186    1758 log.go:172] (0xc0008682c0) (0xc0007fcf00) Create stream\nI0721 01:10:33.877214    1758 log.go:172] (0xc0008682c0) (0xc0007fcf00) Stream added, broadcasting: 5\nI0721 01:10:33.878357    1758 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0721 01:10:33.940175    1758 log.go:172] (0xc0008682c0) Data frame received for 5\nI0721 01:10:33.940207    1758 log.go:172] (0xc0007fcf00) (5) Data frame handling\nI0721 01:10:33.940224    1758 log.go:172] (0xc0008682c0) Data frame received for 3\nI0721 01:10:33.940228    1758 log.go:172] (0xc0007fcdc0) (3) Data frame handling\nI0721 01:10:33.940234    1758 log.go:172] (0xc0007fcdc0) (3) Data frame sent\nI0721 01:10:33.940238    1758 log.go:172] (0xc0008682c0) Data frame received for 3\nI0721 01:10:33.940242    1758 log.go:172] (0xc0007fcdc0) (3) Data frame handling\nI0721 01:10:33.941771    1758 log.go:172] (0xc0008682c0) Data frame received for 1\nI0721 01:10:33.941784    1758 log.go:172] (0xc00074a640) (1) Data frame handling\nI0721 01:10:33.941798    1758 log.go:172] (0xc00074a640) (1) Data frame sent\nI0721 01:10:33.941811    1758 log.go:172] (0xc0008682c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0721 01:10:33.941823    1758 log.go:172] (0xc0008682c0) Go away received\nI0721 01:10:33.942044    1758 log.go:172] (0xc0008682c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0721 01:10:33.942073    1758 log.go:172] (0xc0008682c0) (0xc0007fcdc0) Stream removed, broadcasting: 3\nI0721 01:10:33.942087    1758 log.go:172] (0xc0008682c0) (0xc0007fcf00) Stream removed, broadcasting: 5\n"
Jul 21 01:10:33.948: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 21 01:10:33.948: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 21 01:10:43.964: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:10:43.964: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 01:10:43.964: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 01:10:43.965: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 01:10:53.982: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:10:53.982: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 01:10:53.982: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 01:11:03.971: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:11:03.971: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jul 21 01:11:13.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ldwqb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:11:14.223: INFO: stderr: "I0721 01:11:14.106050    1780 log.go:172] (0xc0002e0420) (0xc000732640) Create stream\nI0721 01:11:14.106109    1780 log.go:172] (0xc0002e0420) (0xc000732640) Stream added, broadcasting: 1\nI0721 01:11:14.108276    1780 log.go:172] (0xc0002e0420) Reply frame received for 1\nI0721 01:11:14.108305    1780 log.go:172] (0xc0002e0420) (0xc0007326e0) Create stream\nI0721 01:11:14.108313    1780 log.go:172] (0xc0002e0420) (0xc0007326e0) Stream added, broadcasting: 3\nI0721 01:11:14.109475    1780 log.go:172] (0xc0002e0420) Reply frame received for 3\nI0721 01:11:14.109521    1780 log.go:172] (0xc0002e0420) (0xc0007cac80) Create stream\nI0721 01:11:14.109532    1780 log.go:172] (0xc0002e0420) (0xc0007cac80) Stream added, broadcasting: 5\nI0721 01:11:14.110386    1780 log.go:172] (0xc0002e0420) Reply frame received for 5\nI0721 01:11:14.216251    1780 log.go:172] (0xc0002e0420) Data frame received for 3\nI0721 01:11:14.216295    1780 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0721 01:11:14.216323    1780 log.go:172] (0xc0007326e0) (3) Data frame sent\nI0721 01:11:14.216412    1780 log.go:172] (0xc0002e0420) Data frame received for 5\nI0721 01:11:14.216458    1780 log.go:172] (0xc0007cac80) (5) Data frame handling\nI0721 01:11:14.216489    1780 log.go:172] (0xc0002e0420) Data frame received for 3\nI0721 01:11:14.216501    1780 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0721 01:11:14.218909    1780 log.go:172] (0xc0002e0420) Data frame received for 1\nI0721 01:11:14.218952    1780 log.go:172] (0xc000732640) (1) Data frame handling\nI0721 01:11:14.218965    1780 log.go:172] (0xc000732640) (1) Data frame sent\nI0721 01:11:14.218999    1780 log.go:172] (0xc0002e0420) (0xc000732640) Stream removed, broadcasting: 1\nI0721 01:11:14.219039    1780 log.go:172] (0xc0002e0420) Go away received\nI0721 01:11:14.219446    1780 log.go:172] (0xc0002e0420) (0xc000732640) Stream removed, broadcasting: 1\nI0721 01:11:14.219481    1780 log.go:172] (0xc0002e0420) (0xc0007326e0) Stream removed, broadcasting: 3\nI0721 01:11:14.219495    1780 log.go:172] (0xc0002e0420) (0xc0007cac80) Stream removed, broadcasting: 5\n"
Jul 21 01:11:14.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:11:14.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 21 01:11:24.253: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 21 01:11:34.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ldwqb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 21 01:11:34.540: INFO: stderr: "I0721 01:11:34.440147    1803 log.go:172] (0xc0001386e0) (0xc00071e640) Create stream\nI0721 01:11:34.440223    1803 log.go:172] (0xc0001386e0) (0xc00071e640) Stream added, broadcasting: 1\nI0721 01:11:34.442952    1803 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0721 01:11:34.443030    1803 log.go:172] (0xc0001386e0) (0xc00071e6e0) Create stream\nI0721 01:11:34.443049    1803 log.go:172] (0xc0001386e0) (0xc00071e6e0) Stream added, broadcasting: 3\nI0721 01:11:34.444075    1803 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0721 01:11:34.444136    1803 log.go:172] (0xc0001386e0) (0xc00062cd20) Create stream\nI0721 01:11:34.444159    1803 log.go:172] (0xc0001386e0) (0xc00062cd20) Stream added, broadcasting: 5\nI0721 01:11:34.445259    1803 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0721 01:11:34.533798    1803 log.go:172] (0xc0001386e0) Data frame received for 5\nI0721 01:11:34.533855    1803 log.go:172] (0xc0001386e0) Data frame received for 3\nI0721 01:11:34.533904    1803 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0721 01:11:34.533934    1803 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0721 01:11:34.533952    1803 log.go:172] (0xc0001386e0) Data frame received for 3\nI0721 01:11:34.533962    1803 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0721 01:11:34.533999    1803 log.go:172] (0xc00062cd20) (5) Data frame handling\nI0721 01:11:34.535760    1803 log.go:172] (0xc0001386e0) Data frame received for 1\nI0721 01:11:34.535787    1803 log.go:172] (0xc00071e640) (1) Data frame handling\nI0721 01:11:34.535800    1803 log.go:172] (0xc00071e640) (1) Data frame sent\nI0721 01:11:34.535814    1803 log.go:172] (0xc0001386e0) (0xc00071e640) Stream removed, broadcasting: 1\nI0721 01:11:34.535829    1803 log.go:172] (0xc0001386e0) Go away received\nI0721 01:11:34.536127    1803 log.go:172] (0xc0001386e0) (0xc00071e640) Stream removed, broadcasting: 1\nI0721 01:11:34.536151    1803 log.go:172] (0xc0001386e0) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0721 01:11:34.536166    1803 log.go:172] (0xc0001386e0) (0xc00062cd20) Stream removed, broadcasting: 5\n"
Jul 21 01:11:34.541: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 21 01:11:34.541: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 21 01:11:44.561: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:11:44.561: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 21 01:11:44.561: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 21 01:11:44.561: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 21 01:11:54.570: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:11:54.570: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 21 01:12:04.570: INFO: Waiting for StatefulSet e2e-tests-statefulset-ldwqb/ss2 to complete update
Jul 21 01:12:04.570: INFO: Waiting for Pod e2e-tests-statefulset-ldwqb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 21 01:12:14.570: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ldwqb
Jul 21 01:12:14.573: INFO: Scaling statefulset ss2 to 0
Jul 21 01:12:44.650: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 01:12:44.652: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:12:44.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ldwqb" for this suite.
Jul 21 01:12:52.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:12:52.785: INFO: namespace: e2e-tests-statefulset-ldwqb, resource: bindings, ignored listing per whitelist
Jul 21 01:12:52.870: INFO: namespace e2e-tests-statefulset-ldwqb deletion completed in 8.146289129s

• [SLOW TEST:179.549 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:12:52.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 21 01:13:03.193: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:03.197: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:05.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:05.201: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:07.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:07.202: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:09.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:09.201: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:11.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:11.202: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:13.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:13.214: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:15.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:15.201: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:17.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:17.201: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:19.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:19.460: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:21.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:21.206: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 21 01:13:23.197: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 21 01:13:23.226: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:13:23.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-txmqd" for this suite.
Jul 21 01:13:47.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:13:47.320: INFO: namespace: e2e-tests-container-lifecycle-hook-txmqd, resource: bindings, ignored listing per whitelist
Jul 21 01:13:47.366: INFO: namespace e2e-tests-container-lifecycle-hook-txmqd deletion completed in 24.135523531s

• [SLOW TEST:54.495 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:13:47.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0721 01:14:27.631189       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 01:14:27.631: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:14:27.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xz822" for this suite.
Jul 21 01:14:37.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:14:37.690: INFO: namespace: e2e-tests-gc-xz822, resource: bindings, ignored listing per whitelist
Jul 21 01:14:37.818: INFO: namespace e2e-tests-gc-xz822 deletion completed in 10.183571575s

• [SLOW TEST:50.451 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:14:37.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul 21 01:14:38.500: INFO: Waiting up to 5m0s for pod "client-containers-8c1968ba-caef-11ea-86e4-0242ac110009" in namespace "e2e-tests-containers-tsb89" to be "success or failure"
Jul 21 01:14:38.673: INFO: Pod "client-containers-8c1968ba-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 172.867186ms
Jul 21 01:14:40.678: INFO: Pod "client-containers-8c1968ba-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177621849s
Jul 21 01:14:42.706: INFO: Pod "client-containers-8c1968ba-caef-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205412797s
STEP: Saw pod success
Jul 21 01:14:42.706: INFO: Pod "client-containers-8c1968ba-caef-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:14:42.709: INFO: Trying to get logs from node hunter-worker pod client-containers-8c1968ba-caef-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:14:42.733: INFO: Waiting for pod client-containers-8c1968ba-caef-11ea-86e4-0242ac110009 to disappear
Jul 21 01:14:42.737: INFO: Pod client-containers-8c1968ba-caef-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:14:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-tsb89" for this suite.
Jul 21 01:14:48.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:14:48.792: INFO: namespace: e2e-tests-containers-tsb89, resource: bindings, ignored listing per whitelist
Jul 21 01:14:48.822: INFO: namespace e2e-tests-containers-tsb89 deletion completed in 6.081433764s

• [SLOW TEST:11.004 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
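The "override all" spec above exercises Kubernetes' documented rules for merging an image's ENTRYPOINT/CMD with a pod's `command`/`args` fields. As a hedged sketch (plain Python modeling the documented merge table, not the e2e framework's own code), the four cases look like this:

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Model the documented ENTRYPOINT/CMD vs. command/args merge rules:
    - neither command nor args set: image ENTRYPOINT + image CMD
    - only command set: command alone (image CMD is ignored)
    - only args set: image ENTRYPOINT + args
    - both set: command + args (image values are ignored entirely)
    """
    if command is None and args is None:
        return image_entrypoint + image_cmd
    if command is not None and args is None:
        return command
    if command is None:
        return image_entrypoint + args
    return command + args

# The "override all" case the test exercises: both command and args are set,
# so the image's own entrypoint and cmd play no part.
print(effective_invocation(["ep"], ["cmd"],
                           command=["/bin/sh", "-c"],
                           args=["echo", "ok"]))  # ['/bin/sh', '-c', 'echo', 'ok']
```

The container names and argument values here are illustrative; the conformance test asserts the same merge behavior by inspecting the test container's output.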
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:14:48.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 21 01:14:53.110: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9251cc52-caef-11ea-86e4-0242ac110009", GenerateName:"", Namespace:"e2e-tests-pods-7cnhc", SelfLink:"/api/v1/namespaces/e2e-tests-pods-7cnhc/pods/pod-submit-remove-9251cc52-caef-11ea-86e4-0242ac110009", UID:"92547443-caef-11ea-b2c9-0242ac120008", ResourceVersion:"1920264", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730890888, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"905026097"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2nnrn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020a8540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2nnrn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c174d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002033bc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c175d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c175f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001c175f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001c175fc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730890889, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730890892, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730890892, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730890888, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.184", StartTime:(*v1.Time)(0xc0015a8780), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0015a87a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://b2f69d3670cb6e77159425b7f96a81f3435937d1e3c769217518de7bc3ad0925"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:07.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7cnhc" for this suite.
Jul 21 01:15:13.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:15:13.722: INFO: namespace: e2e-tests-pods-7cnhc, resource: bindings, ignored listing per whitelist
Jul 21 01:15:13.742: INFO: namespace e2e-tests-pods-7cnhc deletion completed in 6.10654847s

• [SLOW TEST:24.920 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:15:13.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 21 01:15:13.986: INFO: Waiting up to 5m0s for pod "pod-a13cb7f6-caef-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-d44br" to be "success or failure"
Jul 21 01:15:13.996: INFO: Pod "pod-a13cb7f6-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.469755ms
Jul 21 01:15:15.999: INFO: Pod "pod-a13cb7f6-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012543897s
Jul 21 01:15:18.014: INFO: Pod "pod-a13cb7f6-caef-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027651152s
STEP: Saw pod success
Jul 21 01:15:18.014: INFO: Pod "pod-a13cb7f6-caef-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:15:18.016: INFO: Trying to get logs from node hunter-worker2 pod pod-a13cb7f6-caef-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:15:18.379: INFO: Waiting for pod pod-a13cb7f6-caef-11ea-86e4-0242ac110009 to disappear
Jul 21 01:15:18.457: INFO: Pod pod-a13cb7f6-caef-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:18.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d44br" for this suite.
Jul 21 01:15:24.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:15:24.620: INFO: namespace: e2e-tests-emptydir-d44br, resource: bindings, ignored listing per whitelist
Jul 21 01:15:24.620: INFO: namespace e2e-tests-emptydir-d44br deletion completed in 6.159011332s

• [SLOW TEST:10.878 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
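The EmptyDir check above ultimately compares a mount point's permission bits against an expected mode string (the suite expects emptyDir mounts to show up as 0777). A minimal local sketch of that comparison, using a plain temp directory rather than a tmpfs mount or the suite's mounttest image:

```python
import os
import stat
import tempfile

def mode_string(path):
    """Render a path's permissions the way `ls -l` does, e.g. 'drwxrwxrwx'."""
    return stat.filemode(os.stat(path).st_mode)

d = tempfile.mkdtemp()
os.chmod(d, 0o777)  # the e2e test expects emptyDir mounts to default to 0777
print(mode_string(d))  # 'drwxrwxrwx'
```

In the real test a helper container stats the volume path inside the pod and prints the same style of mode string for the framework to verify.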
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:15:24.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:15:24.854: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:26.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-bfkmt" for this suite.
Jul 21 01:15:32.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:15:32.449: INFO: namespace: e2e-tests-custom-resource-definition-bfkmt, resource: bindings, ignored listing per whitelist
Jul 21 01:15:32.504: INFO: namespace e2e-tests-custom-resource-definition-bfkmt deletion completed in 6.389738606s

• [SLOW TEST:7.884 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:15:32.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul 21 01:15:32.829: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:32.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v7glt" for this suite.
Jul 21 01:15:39.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:15:39.285: INFO: namespace: e2e-tests-kubectl-v7glt, resource: bindings, ignored listing per whitelist
Jul 21 01:15:39.324: INFO: namespace e2e-tests-kubectl-v7glt deletion completed in 6.237258602s

• [SLOW TEST:6.819 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
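`kubectl proxy -p 0` leans on standard socket behavior: binding a listener to port 0 makes the kernel assign a free ephemeral port. That mechanism can be demonstrated without a cluster (plain Python sockets; kubectl itself is not involved):

```python
import socket

# Bind to port 0: the kernel assigns a free ephemeral port, which is how
# `kubectl proxy -p 0` avoids collisions with an already-used port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]  # the OS-assigned port, always > 0
print(port)
s.close()
```

The proxy then reports the chosen port on startup, and the test curls `/api/` through it.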
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:15:39.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-w69lc/configmap-test-b0807373-caef-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 01:15:39.633: INFO: Waiting up to 5m0s for pod "pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-w69lc" to be "success or failure"
Jul 21 01:15:39.670: INFO: Pod "pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 37.13673ms
Jul 21 01:15:41.725: INFO: Pod "pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092181003s
Jul 21 01:15:43.733: INFO: Pod "pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10039426s
STEP: Saw pod success
Jul 21 01:15:43.733: INFO: Pod "pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:15:43.735: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009 container env-test: 
STEP: delete the pod
Jul 21 01:15:43.769: INFO: Waiting for pod pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009 to disappear
Jul 21 01:15:43.982: INFO: Pod pod-configmaps-b08db31a-caef-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:43.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w69lc" for this suite.
Jul 21 01:15:50.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:15:50.110: INFO: namespace: e2e-tests-configmap-w69lc, resource: bindings, ignored listing per whitelist
Jul 21 01:15:50.110: INFO: namespace e2e-tests-configmap-w69lc deletion completed in 6.124839552s

• [SLOW TEST:10.786 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:15:50.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b6f0204a-caef-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:15:50.394: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-2chjj" to be "success or failure"
Jul 21 01:15:50.398: INFO: Pod "pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.943627ms
Jul 21 01:15:52.402: INFO: Pod "pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007805787s
Jul 21 01:15:54.462: INFO: Pod "pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068039564s
STEP: Saw pod success
Jul 21 01:15:54.462: INFO: Pod "pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:15:54.465: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jul 21 01:15:54.631: INFO: Waiting for pod pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009 to disappear
Jul 21 01:15:54.661: INFO: Pod pod-projected-secrets-b6f6d0a6-caef-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:15:54.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2chjj" for this suite.
Jul 21 01:16:00.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:16:00.757: INFO: namespace: e2e-tests-projected-2chjj, resource: bindings, ignored listing per whitelist
Jul 21 01:16:00.801: INFO: namespace e2e-tests-projected-2chjj deletion completed in 6.11534347s

• [SLOW TEST:10.691 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:16:00.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 21 01:16:02.119: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:16:10.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gpdqd" for this suite.
Jul 21 01:16:16.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:16:16.512: INFO: namespace: e2e-tests-init-container-gpdqd, resource: bindings, ignored listing per whitelist
Jul 21 01:16:16.520: INFO: namespace e2e-tests-init-container-gpdqd deletion completed in 6.087603648s

• [SLOW TEST:15.718 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:16:16.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zskxh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zskxh to expose endpoints map[]
Jul 21 01:16:16.662: INFO: Get endpoints failed (16.7873ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 21 01:16:17.666: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zskxh exposes endpoints map[] (1.020572391s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zskxh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zskxh to expose endpoints map[pod1:[100]]
Jul 21 01:16:21.737: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zskxh exposes endpoints map[pod1:[100]] (4.06406388s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zskxh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zskxh to expose endpoints map[pod1:[100] pod2:[101]]
Jul 21 01:16:24.780: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zskxh exposes endpoints map[pod1:[100] pod2:[101]] (3.039411882s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zskxh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zskxh to expose endpoints map[pod2:[101]]
Jul 21 01:16:25.805: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zskxh exposes endpoints map[pod2:[101]] (1.021669511s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zskxh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zskxh to expose endpoints map[]
Jul 21 01:16:26.873: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zskxh exposes endpoints map[] (1.062254516s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:16:27.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zskxh" for this suite.
Jul 21 01:16:49.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:16:49.266: INFO: namespace: e2e-tests-services-zskxh, resource: bindings, ignored listing per whitelist
Jul 21 01:16:49.276: INFO: namespace e2e-tests-services-zskxh deletion completed in 22.229614105s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.756 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
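The Services spec above repeatedly waits for the service's endpoints to match an expected map of pod name to ports (printed in the log as e.g. `map[pod1:[100] pod2:[101]]`). A hedged sketch of that comparison in plain Python (the real suite builds the observed map from Endpoints objects read off the API server):

```python
def endpoints_match(observed, expected):
    """Compare pod-name -> port-list maps, ignoring pod order and port order,
    the shape the log prints as e.g. map[pod1:[100] pod2:[101]]."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(observed) == normalize(expected)

# Order of pods and ports should not matter...
print(endpoints_match({"pod1": [100], "pod2": [101]},
                      {"pod2": [101], "pod1": [100]}))  # True
# ...but a missing pod (e.g. after pod1 is deleted) does.
print(endpoints_match({"pod2": [101]},
                      {"pod1": [100], "pod2": [101]}))  # False
```

The test polls with this kind of comparison for up to 3m0s after each create/delete, which is where the "(N elapsed)" timings in the log come from.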
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:16:49.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul 21 01:16:49.404: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-dkfq4" to be "success or failure"
Jul 21 01:16:49.411: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.874888ms
Jul 21 01:16:51.521: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117522881s
Jul 21 01:16:53.525: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121537698s
Jul 21 01:16:55.528: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124550991s
STEP: Saw pod success
Jul 21 01:16:55.528: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 21 01:16:55.530: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 21 01:16:55.549: INFO: Waiting for pod pod-host-path-test to disappear
Jul 21 01:16:55.569: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:16:55.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-dkfq4" for this suite.
Jul 21 01:17:01.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:17:01.603: INFO: namespace: e2e-tests-hostpath-dkfq4, resource: bindings, ignored listing per whitelist
Jul 21 01:17:01.668: INFO: namespace e2e-tests-hostpath-dkfq4 deletion completed in 6.095055498s

• [SLOW TEST:12.392 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:17:01.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul 21 01:17:01.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 21 01:17:01.942: INFO: stderr: ""
Jul 21 01:17:01.942: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:17:01.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tqb2r" for this suite.
Jul 21 01:17:07.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:17:08.022: INFO: namespace: e2e-tests-kubectl-tqb2r, resource: bindings, ignored listing per whitelist
Jul 21 01:17:08.077: INFO: namespace e2e-tests-kubectl-tqb2r deletion completed in 6.131181298s

• [SLOW TEST:6.410 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:17:08.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-e557c250-caef-11ea-86e4-0242ac110009
STEP: Creating secret with name s-test-opt-upd-e557c2d1-caef-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e557c250-caef-11ea-86e4-0242ac110009
STEP: Updating secret s-test-opt-upd-e557c2d1-caef-11ea-86e4-0242ac110009
STEP: Creating secret with name s-test-opt-create-e557c308-caef-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:17:16.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2zpwm" for this suite.
Jul 21 01:17:38.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:17:38.375: INFO: namespace: e2e-tests-secrets-2zpwm, resource: bindings, ignored listing per whitelist
Jul 21 01:17:38.482: INFO: namespace e2e-tests-secrets-2zpwm deletion completed in 22.15778217s

• [SLOW TEST:30.404 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:17:38.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:17:38.620: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 21 01:17:38.627: INFO: Number of nodes with available pods: 0
Jul 21 01:17:38.627: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 21 01:17:38.662: INFO: Number of nodes with available pods: 0
Jul 21 01:17:38.662: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:39.667: INFO: Number of nodes with available pods: 0
Jul 21 01:17:39.667: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:40.667: INFO: Number of nodes with available pods: 0
Jul 21 01:17:40.667: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:41.667: INFO: Number of nodes with available pods: 1
Jul 21 01:17:41.667: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 21 01:17:41.703: INFO: Number of nodes with available pods: 1
Jul 21 01:17:41.703: INFO: Number of running nodes: 0, number of available pods: 1
Jul 21 01:17:42.707: INFO: Number of nodes with available pods: 0
Jul 21 01:17:42.707: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 21 01:17:42.718: INFO: Number of nodes with available pods: 0
Jul 21 01:17:42.718: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:43.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:43.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:44.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:44.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:45.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:45.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:46.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:46.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:47.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:47.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:48.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:48.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:49.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:49.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:50.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:50.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:51.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:51.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:52.723: INFO: Number of nodes with available pods: 0
Jul 21 01:17:52.723: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:53.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:53.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:54.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:54.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:55.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:55.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:56.721: INFO: Number of nodes with available pods: 0
Jul 21 01:17:56.721: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:57.730: INFO: Number of nodes with available pods: 0
Jul 21 01:17:57.730: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:58.721: INFO: Number of nodes with available pods: 0
Jul 21 01:17:58.721: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:17:59.722: INFO: Number of nodes with available pods: 0
Jul 21 01:17:59.722: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 01:18:00.721: INFO: Number of nodes with available pods: 1
Jul 21 01:18:00.721: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zzthl, will wait for the garbage collector to delete the pods
Jul 21 01:18:00.786: INFO: Deleting DaemonSet.extensions daemon-set took: 6.300507ms
Jul 21 01:18:00.887: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.231436ms
Jul 21 01:18:07.733: INFO: Number of nodes with available pods: 0
Jul 21 01:18:07.733: INFO: Number of running nodes: 0, number of available pods: 0
Jul 21 01:18:07.736: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zzthl/daemonsets","resourceVersion":"1920983"},"items":null}

Jul 21 01:18:07.738: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zzthl/pods","resourceVersion":"1920983"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:18:07.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zzthl" for this suite.
Jul 21 01:18:13.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:18:13.998: INFO: namespace: e2e-tests-daemonsets-zzthl, resource: bindings, ignored listing per whitelist
Jul 21 01:18:14.044: INFO: namespace e2e-tests-daemonsets-zzthl deletion completed in 6.133515408s

• [SLOW TEST:35.562 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:18:14.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:18:18.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sl494" for this suite.
Jul 21 01:19:08.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:19:08.356: INFO: namespace: e2e-tests-kubelet-test-sl494, resource: bindings, ignored listing per whitelist
Jul 21 01:19:08.435: INFO: namespace e2e-tests-kubelet-test-sl494 deletion completed in 50.181865248s

• [SLOW TEST:54.391 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:19:08.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-2d141262-caf0-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:19:08.582: INFO: Waiting up to 5m0s for pod "pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-8f49h" to be "success or failure"
Jul 21 01:19:08.594: INFO: Pod "pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.930112ms
Jul 21 01:19:10.614: INFO: Pod "pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032371778s
Jul 21 01:19:12.618: INFO: Pod "pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036000027s
STEP: Saw pod success
Jul 21 01:19:12.618: INFO: Pod "pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:19:12.621: INFO: Trying to get logs from node hunter-worker pod pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jul 21 01:19:12.663: INFO: Waiting for pod pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:19:12.696: INFO: Pod pod-secrets-2d15c9e0-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:19:12.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8f49h" for this suite.
Jul 21 01:19:18.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:19:18.752: INFO: namespace: e2e-tests-secrets-8f49h, resource: bindings, ignored listing per whitelist
Jul 21 01:19:18.798: INFO: namespace e2e-tests-secrets-8f49h deletion completed in 6.098303161s

• [SLOW TEST:10.363 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:19:18.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 01:19:18.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hm2zg'
Jul 21 01:19:21.683: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 21 01:19:21.683: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul 21 01:19:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-hm2zg'
Jul 21 01:19:23.825: INFO: stderr: ""
Jul 21 01:19:23.825: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:19:23.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hm2zg" for this suite.
Jul 21 01:19:29.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:19:30.043: INFO: namespace: e2e-tests-kubectl-hm2zg, resource: bindings, ignored listing per whitelist
Jul 21 01:19:30.062: INFO: namespace e2e-tests-kubectl-hm2zg deletion completed in 6.178886681s

• [SLOW TEST:11.264 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:19:30.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3a06c34d-caf0-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:19:30.455: INFO: Waiting up to 5m0s for pod "pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-dgsgb" to be "success or failure"
Jul 21 01:19:30.723: INFO: Pod "pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 267.676832ms
Jul 21 01:19:32.836: INFO: Pod "pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380676663s
Jul 21 01:19:34.840: INFO: Pod "pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.384387035s
STEP: Saw pod success
Jul 21 01:19:34.840: INFO: Pod "pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:19:34.843: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009 container secret-env-test: 
STEP: delete the pod
Jul 21 01:19:34.913: INFO: Waiting for pod pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:19:34.980: INFO: Pod pod-secrets-3a099e2d-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:19:34.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dgsgb" for this suite.
Jul 21 01:19:41.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:19:41.137: INFO: namespace: e2e-tests-secrets-dgsgb, resource: bindings, ignored listing per whitelist
Jul 21 01:19:41.170: INFO: namespace e2e-tests-secrets-dgsgb deletion completed in 6.18602744s

• [SLOW TEST:11.107 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:19:41.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 01:19:41.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mbczz'
Jul 21 01:19:41.460: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 21 01:19:41.460: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 21 01:19:41.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-mbczz'
Jul 21 01:19:41.581: INFO: stderr: ""
Jul 21 01:19:41.581: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:19:41.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mbczz" for this suite.
Jul 21 01:20:03.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:20:03.677: INFO: namespace: e2e-tests-kubectl-mbczz, resource: bindings, ignored listing per whitelist
Jul 21 01:20:03.706: INFO: namespace e2e-tests-kubectl-mbczz deletion completed in 22.089319791s

• [SLOW TEST:22.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:20:03.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul 21 01:20:03.832: INFO: Waiting up to 5m0s for pod "client-containers-4e012614-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-containers-5ws2d" to be "success or failure"
Jul 21 01:20:03.846: INFO: Pod "client-containers-4e012614-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.819881ms
Jul 21 01:20:05.850: INFO: Pod "client-containers-4e012614-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017520429s
Jul 21 01:20:07.872: INFO: Pod "client-containers-4e012614-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03947121s
STEP: Saw pod success
Jul 21 01:20:07.872: INFO: Pod "client-containers-4e012614-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:20:07.875: INFO: Trying to get logs from node hunter-worker pod client-containers-4e012614-caf0-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:20:07.902: INFO: Waiting for pod client-containers-4e012614-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:20:07.919: INFO: Pod client-containers-4e012614-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:20:07.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5ws2d" for this suite.
Jul 21 01:20:13.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:20:13.975: INFO: namespace: e2e-tests-containers-5ws2d, resource: bindings, ignored listing per whitelist
Jul 21 01:20:14.026: INFO: namespace e2e-tests-containers-5ws2d deletion completed in 6.104250579s

• [SLOW TEST:10.321 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:20:14.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-5431ee0a-caf0-11ea-86e4-0242ac110009
STEP: Creating secret with name secret-projected-all-test-volume-5431edd8-caf0-11ea-86e4-0242ac110009
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 21 01:20:14.201: INFO: Waiting up to 5m0s for pod "projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-f4nww" to be "success or failure"
Jul 21 01:20:14.216: INFO: Pod "projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.356144ms
Jul 21 01:20:16.220: INFO: Pod "projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019097051s
Jul 21 01:20:18.225: INFO: Pod "projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023883634s
STEP: Saw pod success
Jul 21 01:20:18.225: INFO: Pod "projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:20:18.228: INFO: Trying to get logs from node hunter-worker pod projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009 container projected-all-volume-test: 
STEP: delete the pod
Jul 21 01:20:18.287: INFO: Waiting for pod projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:20:18.349: INFO: Pod projected-volume-5431ed5f-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:20:18.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f4nww" for this suite.
Jul 21 01:20:24.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:20:24.399: INFO: namespace: e2e-tests-projected-f4nww, resource: bindings, ignored listing per whitelist
Jul 21 01:20:24.448: INFO: namespace e2e-tests-projected-f4nww deletion completed in 6.09231963s

• [SLOW TEST:10.422 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:20:24.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s4fc8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 21 01:20:24.527: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 21 01:20:52.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostName&protocol=udp&host=10.244.1.211&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s4fc8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:20:52.732: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:20:52.759247       6 log.go:172] (0xc00168a370) (0xc001ac5220) Create stream
I0721 01:20:52.759276       6 log.go:172] (0xc00168a370) (0xc001ac5220) Stream added, broadcasting: 1
I0721 01:20:52.761883       6 log.go:172] (0xc00168a370) Reply frame received for 1
I0721 01:20:52.761915       6 log.go:172] (0xc00168a370) (0xc001ac5400) Create stream
I0721 01:20:52.761929       6 log.go:172] (0xc00168a370) (0xc001ac5400) Stream added, broadcasting: 3
I0721 01:20:52.762663       6 log.go:172] (0xc00168a370) Reply frame received for 3
I0721 01:20:52.762684       6 log.go:172] (0xc00168a370) (0xc002369a40) Create stream
I0721 01:20:52.762690       6 log.go:172] (0xc00168a370) (0xc002369a40) Stream added, broadcasting: 5
I0721 01:20:52.763293       6 log.go:172] (0xc00168a370) Reply frame received for 5
I0721 01:20:52.832199       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 01:20:52.832246       6 log.go:172] (0xc001ac5400) (3) Data frame handling
I0721 01:20:52.832282       6 log.go:172] (0xc001ac5400) (3) Data frame sent
I0721 01:20:52.832522       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 01:20:52.832579       6 log.go:172] (0xc00168a370) Data frame received for 5
I0721 01:20:52.832633       6 log.go:172] (0xc002369a40) (5) Data frame handling
I0721 01:20:52.832686       6 log.go:172] (0xc001ac5400) (3) Data frame handling
I0721 01:20:52.834679       6 log.go:172] (0xc00168a370) Data frame received for 1
I0721 01:20:52.834702       6 log.go:172] (0xc001ac5220) (1) Data frame handling
I0721 01:20:52.834724       6 log.go:172] (0xc001ac5220) (1) Data frame sent
I0721 01:20:52.834734       6 log.go:172] (0xc00168a370) (0xc001ac5220) Stream removed, broadcasting: 1
I0721 01:20:52.834798       6 log.go:172] (0xc00168a370) Go away received
I0721 01:20:52.834909       6 log.go:172] (0xc00168a370) (0xc001ac5220) Stream removed, broadcasting: 1
I0721 01:20:52.834932       6 log.go:172] (0xc00168a370) (0xc001ac5400) Stream removed, broadcasting: 3
I0721 01:20:52.834939       6 log.go:172] (0xc00168a370) (0xc002369a40) Stream removed, broadcasting: 5
Jul 21 01:20:52.834: INFO: Waiting for endpoints: map[]
Jul 21 01:20:52.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostName&protocol=udp&host=10.244.2.194&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-s4fc8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 01:20:52.839: INFO: >>> kubeConfig: /root/.kube/config
I0721 01:20:52.871616       6 log.go:172] (0xc000675970) (0xc0025c0780) Create stream
I0721 01:20:52.871645       6 log.go:172] (0xc000675970) (0xc0025c0780) Stream added, broadcasting: 1
I0721 01:20:52.875576       6 log.go:172] (0xc000675970) Reply frame received for 1
I0721 01:20:52.875623       6 log.go:172] (0xc000675970) (0xc0025c0820) Create stream
I0721 01:20:52.875644       6 log.go:172] (0xc000675970) (0xc0025c0820) Stream added, broadcasting: 3
I0721 01:20:52.876693       6 log.go:172] (0xc000675970) Reply frame received for 3
I0721 01:20:52.876820       6 log.go:172] (0xc000675970) (0xc001ac54a0) Create stream
I0721 01:20:52.876835       6 log.go:172] (0xc000675970) (0xc001ac54a0) Stream added, broadcasting: 5
I0721 01:20:52.878099       6 log.go:172] (0xc000675970) Reply frame received for 5
I0721 01:20:52.932945       6 log.go:172] (0xc000675970) Data frame received for 5
I0721 01:20:52.932980       6 log.go:172] (0xc001ac54a0) (5) Data frame handling
I0721 01:20:52.932999       6 log.go:172] (0xc000675970) Data frame received for 3
I0721 01:20:52.933005       6 log.go:172] (0xc0025c0820) (3) Data frame handling
I0721 01:20:52.933014       6 log.go:172] (0xc0025c0820) (3) Data frame sent
I0721 01:20:52.933026       6 log.go:172] (0xc000675970) Data frame received for 3
I0721 01:20:52.933039       6 log.go:172] (0xc0025c0820) (3) Data frame handling
I0721 01:20:52.934408       6 log.go:172] (0xc000675970) Data frame received for 1
I0721 01:20:52.934431       6 log.go:172] (0xc0025c0780) (1) Data frame handling
I0721 01:20:52.934440       6 log.go:172] (0xc0025c0780) (1) Data frame sent
I0721 01:20:52.934451       6 log.go:172] (0xc000675970) (0xc0025c0780) Stream removed, broadcasting: 1
I0721 01:20:52.934476       6 log.go:172] (0xc000675970) Go away received
I0721 01:20:52.934589       6 log.go:172] (0xc000675970) (0xc0025c0780) Stream removed, broadcasting: 1
I0721 01:20:52.934603       6 log.go:172] (0xc000675970) (0xc0025c0820) Stream removed, broadcasting: 3
I0721 01:20:52.934611       6 log.go:172] (0xc000675970) (0xc001ac54a0) Stream removed, broadcasting: 5
Jul 21 01:20:52.934: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:20:52.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-s4fc8" for this suite.
Jul 21 01:21:16.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:21:16.977: INFO: namespace: e2e-tests-pod-network-test-s4fc8, resource: bindings, ignored listing per whitelist
Jul 21 01:21:17.029: INFO: namespace e2e-tests-pod-network-test-s4fc8 deletion completed in 24.091982459s

• [SLOW TEST:52.581 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:21:17.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rmxgt
Jul 21 01:21:21.126: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rmxgt
STEP: checking the pod's current state and verifying that restartCount is present
Jul 21 01:21:21.130: INFO: Initial restart count of pod liveness-exec is 0
Jul 21 01:22:15.272: INFO: Restart count of pod e2e-tests-container-probe-rmxgt/liveness-exec is now 1 (54.142322068s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:22:15.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rmxgt" for this suite.
Jul 21 01:22:22.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:22:22.069: INFO: namespace: e2e-tests-container-probe-rmxgt, resource: bindings, ignored listing per whitelist
Jul 21 01:22:22.123: INFO: namespace e2e-tests-container-probe-rmxgt deletion completed in 6.373779763s

• [SLOW TEST:65.093 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:22:22.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:22:22.228: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 21 01:22:22.278: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 21 01:22:27.282: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 21 01:22:27.282: INFO: Creating deployment "test-rolling-update-deployment"
Jul 21 01:22:27.287: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 21 01:22:27.302: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 21 01:22:29.310: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul 21 01:22:29.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:22:31.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730891347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:22:33.316: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 21 01:22:33.322: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-8f9sd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8f9sd/deployments/test-rolling-update-deployment,UID:a388eae8-caf0-11ea-b2c9-0242ac120008,ResourceVersion:1921835,Generation:1,CreationTimestamp:2020-07-21 01:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-21 01:22:27 +0000 UTC 2020-07-21 01:22:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-21 01:22:32 +0000 UTC 2020-07-21 01:22:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul 21 01:22:33.325: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-8f9sd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8f9sd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:a38c7cb3-caf0-11ea-b2c9-0242ac120008,ResourceVersion:1921826,Generation:1,CreationTimestamp:2020-07-21 01:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a388eae8-caf0-11ea-b2c9-0242ac120008 0xc001db7db7 0xc001db7db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 21 01:22:33.325: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 21 01:22:33.325: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-8f9sd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8f9sd/replicasets/test-rolling-update-controller,UID:a085b14f-caf0-11ea-b2c9-0242ac120008,ResourceVersion:1921834,Generation:2,CreationTimestamp:2020-07-21 01:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a388eae8-caf0-11ea-b2c9-0242ac120008 0xc001db7c77 0xc001db7c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 21 01:22:33.327: INFO: Pod "test-rolling-update-deployment-75db98fb4c-ct67w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-ct67w,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-8f9sd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8f9sd/pods/test-rolling-update-deployment-75db98fb4c-ct67w,UID:a38d2e41-caf0-11ea-b2c9-0242ac120008,ResourceVersion:1921825,Generation:0,CreationTimestamp:2020-07-21 01:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c a38c7cb3-caf0-11ea-b2c9-0242ac120008 0xc000386f67 0xc000386f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkf56 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkf56,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-fkf56 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000386fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000387010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:22:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:22:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:22:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:22:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.197,StartTime:2020-07-21 01:22:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-21 01:22:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://256c8c077beb0962c60be135697feb15ecc7e2431c03687ef2fc06fd16cdc929}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:22:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8f9sd" for this suite.
Jul 21 01:22:39.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:22:39.458: INFO: namespace: e2e-tests-deployment-8f9sd, resource: bindings, ignored listing per whitelist
Jul 21 01:22:39.496: INFO: namespace e2e-tests-deployment-8f9sd deletion completed in 6.166025438s

• [SLOW TEST:17.373 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:22:39.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 01:22:39.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-fbntq" to be "success or failure"
Jul 21 01:22:39.838: INFO: Pod "downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.287609ms
Jul 21 01:22:41.910: INFO: Pod "downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081483296s
Jul 21 01:22:43.914: INFO: Pod "downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085230771s
STEP: Saw pod success
Jul 21 01:22:43.914: INFO: Pod "downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:22:43.917: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 01:22:44.051: INFO: Waiting for pod downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:22:44.180: INFO: Pod downwardapi-volume-ab0233f9-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:22:44.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fbntq" for this suite.
Jul 21 01:22:50.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:22:50.324: INFO: namespace: e2e-tests-downward-api-fbntq, resource: bindings, ignored listing per whitelist
Jul 21 01:22:50.326: INFO: namespace e2e-tests-downward-api-fbntq deletion completed in 6.142375782s

• [SLOW TEST:10.829 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:22:50.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 21 01:22:50.493: INFO: Waiting up to 5m0s for pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-vw7f6" to be "success or failure"
Jul 21 01:22:50.503: INFO: Pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075195ms
Jul 21 01:22:52.507: INFO: Pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01394907s
Jul 21 01:22:54.511: INFO: Pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.018041276s
Jul 21 01:22:56.515: INFO: Pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021783461s
STEP: Saw pod success
Jul 21 01:22:56.515: INFO: Pod "downward-api-b1551d60-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:22:56.518: INFO: Trying to get logs from node hunter-worker2 pod downward-api-b1551d60-caf0-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 01:22:56.539: INFO: Waiting for pod downward-api-b1551d60-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:22:56.561: INFO: Pod downward-api-b1551d60-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:22:56.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vw7f6" for this suite.
Jul 21 01:23:02.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:23:02.632: INFO: namespace: e2e-tests-downward-api-vw7f6, resource: bindings, ignored listing per whitelist
Jul 21 01:23:02.683: INFO: namespace e2e-tests-downward-api-vw7f6 deletion completed in 6.118032315s

• [SLOW TEST:12.357 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
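The "[sig-node] Downward API" case above checks the environment-variable flavor of the downward API rather than the volume flavor: pod name, namespace, and IP are injected via `fieldRef`. A rough sketch of such a pod (names and image are assumptions, not from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                      # assumed
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The pod runs to `Succeeded` (visible in the Pending → Running → Succeeded progression above) and the test verifies the expected variables in its output.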
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:23:02.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b8b0d7ed-caf0-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 01:23:02.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-8xz2f" to be "success or failure"
Jul 21 01:23:02.803: INFO: Pod "pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.778171ms
Jul 21 01:23:04.893: INFO: Pod "pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093658047s
Jul 21 01:23:06.897: INFO: Pod "pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097478855s
STEP: Saw pod success
Jul 21 01:23:06.897: INFO: Pod "pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:23:06.899: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jul 21 01:23:07.200: INFO: Waiting for pod pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:23:07.256: INFO: Pod pod-configmaps-b8b2e73e-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:23:07.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8xz2f" for this suite.
Jul 21 01:23:13.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:23:13.391: INFO: namespace: e2e-tests-configmap-8xz2f, resource: bindings, ignored listing per whitelist
Jul 21 01:23:13.435: INFO: namespace e2e-tests-configmap-8xz2f deletion completed in 6.173936756s

• [SLOW TEST:10.751 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
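The ConfigMap case above mounts a single ConfigMap at two different paths in the same pod, then confirms both mounts serve the data. A sketch of that shape, assuming hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # assumed
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1
    configMap:
      name: my-configmap                # hypothetical ConfigMap name
  - name: cm-volume-2
    configMap:
      name: my-configmap                # same ConfigMap, mounted a second time
```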
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:23:13.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 01:23:13.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-b6mh7'
Jul 21 01:23:13.649: INFO: stderr: ""
Jul 21 01:23:13.649: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 21 01:23:13.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b6mh7'
Jul 21 01:23:27.580: INFO: stderr: ""
Jul 21 01:23:27.580: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:23:27.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b6mh7" for this suite.
Jul 21 01:23:33.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:23:33.648: INFO: namespace: e2e-tests-kubectl-b6mh7, resource: bindings, ignored listing per whitelist
Jul 21 01:23:33.692: INFO: namespace e2e-tests-kubectl-b6mh7 deletion completed in 6.092475384s

• [SLOW TEST:20.257 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
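The `kubectl run` invocation logged above uses `--restart=Never` with the `run-pod/v1` generator, which creates a bare Pod rather than a Deployment — that is what the test asserts. The equivalent manifest is roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod             # label kubectl run applies by default
spec:
  restartPolicy: Never                  # --restart=Never selects the pod generator
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

With `--restart=Always` the same command would instead have produced a Deployment in this kubectl version.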
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:23:33.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 21 01:23:33.835: INFO: Waiting up to 5m0s for pod "pod-cb32efe1-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-vfh92" to be "success or failure"
Jul 21 01:23:33.859: INFO: Pod "pod-cb32efe1-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 23.265709ms
Jul 21 01:23:36.073: INFO: Pod "pod-cb32efe1-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237190593s
Jul 21 01:23:38.077: INFO: Pod "pod-cb32efe1-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241199734s
STEP: Saw pod success
Jul 21 01:23:38.077: INFO: Pod "pod-cb32efe1-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:23:38.079: INFO: Trying to get logs from node hunter-worker2 pod pod-cb32efe1-caf0-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:23:38.338: INFO: Waiting for pod pod-cb32efe1-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:23:38.397: INFO: Pod pod-cb32efe1-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:23:38.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vfh92" for this suite.
Jul 21 01:23:44.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:23:44.620: INFO: namespace: e2e-tests-emptydir-vfh92, resource: bindings, ignored listing per whitelist
Jul 21 01:23:44.620: INFO: namespace e2e-tests-emptydir-vfh92 deletion completed in 6.218488369s

• [SLOW TEST:10.927 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
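The "(root,0777,tmpfs)" case above runs a pod as root with a memory-backed `emptyDir` and verifies the volume's permission bits and filesystem type. A sketch of such a pod — names and image are illustrative, and the real test uses its own mount-test image and assertions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # assumed
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir
```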
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:23:44.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-94m9g
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-94m9g
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-94m9g
Jul 21 01:23:44.813: INFO: Found 0 stateful pods, waiting for 1
Jul 21 01:23:54.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul 21 01:23:54.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:23:55.068: INFO: stderr: "I0721 01:23:54.949370    2003 log.go:172] (0xc0003382c0) (0xc0008165a0) Create stream\nI0721 01:23:54.949430    2003 log.go:172] (0xc0003382c0) (0xc0008165a0) Stream added, broadcasting: 1\nI0721 01:23:54.951763    2003 log.go:172] (0xc0003382c0) Reply frame received for 1\nI0721 01:23:54.951791    2003 log.go:172] (0xc0003382c0) (0xc000666dc0) Create stream\nI0721 01:23:54.951798    2003 log.go:172] (0xc0003382c0) (0xc000666dc0) Stream added, broadcasting: 3\nI0721 01:23:54.952664    2003 log.go:172] (0xc0003382c0) Reply frame received for 3\nI0721 01:23:54.952693    2003 log.go:172] (0xc0003382c0) (0xc000816640) Create stream\nI0721 01:23:54.952701    2003 log.go:172] (0xc0003382c0) (0xc000816640) Stream added, broadcasting: 5\nI0721 01:23:54.953813    2003 log.go:172] (0xc0003382c0) Reply frame received for 5\nI0721 01:23:55.061230    2003 log.go:172] (0xc0003382c0) Data frame received for 3\nI0721 01:23:55.061284    2003 log.go:172] (0xc000666dc0) (3) Data frame handling\nI0721 01:23:55.061320    2003 log.go:172] (0xc000666dc0) (3) Data frame sent\nI0721 01:23:55.061338    2003 log.go:172] (0xc0003382c0) Data frame received for 3\nI0721 01:23:55.061353    2003 log.go:172] (0xc000666dc0) (3) Data frame handling\nI0721 01:23:55.061400    2003 log.go:172] (0xc0003382c0) Data frame received for 5\nI0721 01:23:55.061414    2003 log.go:172] (0xc000816640) (5) Data frame handling\nI0721 01:23:55.063300    2003 log.go:172] (0xc0003382c0) Data frame received for 1\nI0721 01:23:55.063312    2003 log.go:172] (0xc0008165a0) (1) Data frame handling\nI0721 01:23:55.063318    2003 log.go:172] (0xc0008165a0) (1) Data frame sent\nI0721 01:23:55.063527    2003 log.go:172] (0xc0003382c0) (0xc0008165a0) Stream removed, broadcasting: 1\nI0721 01:23:55.063746    2003 log.go:172] (0xc0003382c0) Go away received\nI0721 01:23:55.063798    2003 log.go:172] (0xc0003382c0) (0xc0008165a0) Stream removed, broadcasting: 1\nI0721 01:23:55.063849    2003 
log.go:172] (0xc0003382c0) (0xc000666dc0) Stream removed, broadcasting: 3\nI0721 01:23:55.063872    2003 log.go:172] (0xc0003382c0) (0xc000816640) Stream removed, broadcasting: 5\n"
Jul 21 01:23:55.068: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:23:55.068: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 21 01:23:55.078: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 21 01:24:05.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
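The `mv` of `/usr/share/nginx/html/index.html` above is how this test makes a pod "unhealthy": once the default page is gone, the container's readiness probe starts failing and the pod drops to Ready=false, as the two lines above show. Assuming the test's StatefulSet probes nginx over HTTP (the exact probe is not shown in this log), it would look roughly like:

```yaml
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  periodSeconds: 1
  successThreshold: 1
  failureThreshold: 1
```

Moving the file back later in the run (the reverse `mv /tmp/index.html ...`) restores readiness, letting the test check that burst scaling proceeds regardless of the pods' readiness state.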
Jul 21 01:24:05.082: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 01:24:05.164: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 21 01:24:05.164: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  }]
Jul 21 01:24:05.164: INFO: ss-1                  Pending         []
Jul 21 01:24:05.164: INFO: 
Jul 21 01:24:05.164: INFO: StatefulSet ss has not reached scale 3, at 2
Jul 21 01:24:06.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.934076276s
Jul 21 01:24:07.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.929127694s
Jul 21 01:24:08.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.879416381s
Jul 21 01:24:09.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.66433806s
Jul 21 01:24:10.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.550747055s
Jul 21 01:24:11.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.35227345s
Jul 21 01:24:12.788: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.314915131s
Jul 21 01:24:13.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.309741975s
Jul 21 01:24:14.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 301.107771ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-94m9g
Jul 21 01:24:15.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 21 01:24:16.027: INFO: stderr: "I0721 01:24:15.963571    2026 log.go:172] (0xc0007da160) (0xc0006f45a0) Create stream\nI0721 01:24:15.963622    2026 log.go:172] (0xc0007da160) (0xc0006f45a0) Stream added, broadcasting: 1\nI0721 01:24:15.965629    2026 log.go:172] (0xc0007da160) Reply frame received for 1\nI0721 01:24:15.965665    2026 log.go:172] (0xc0007da160) (0xc0000eec80) Create stream\nI0721 01:24:15.965675    2026 log.go:172] (0xc0007da160) (0xc0000eec80) Stream added, broadcasting: 3\nI0721 01:24:15.966389    2026 log.go:172] (0xc0007da160) Reply frame received for 3\nI0721 01:24:15.966409    2026 log.go:172] (0xc0007da160) (0xc0006f4640) Create stream\nI0721 01:24:15.966420    2026 log.go:172] (0xc0007da160) (0xc0006f4640) Stream added, broadcasting: 5\nI0721 01:24:15.967156    2026 log.go:172] (0xc0007da160) Reply frame received for 5\nI0721 01:24:16.021345    2026 log.go:172] (0xc0007da160) Data frame received for 5\nI0721 01:24:16.021376    2026 log.go:172] (0xc0006f4640) (5) Data frame handling\nI0721 01:24:16.021396    2026 log.go:172] (0xc0007da160) Data frame received for 3\nI0721 01:24:16.021402    2026 log.go:172] (0xc0000eec80) (3) Data frame handling\nI0721 01:24:16.021409    2026 log.go:172] (0xc0000eec80) (3) Data frame sent\nI0721 01:24:16.021416    2026 log.go:172] (0xc0007da160) Data frame received for 3\nI0721 01:24:16.021421    2026 log.go:172] (0xc0000eec80) (3) Data frame handling\nI0721 01:24:16.022994    2026 log.go:172] (0xc0007da160) Data frame received for 1\nI0721 01:24:16.023016    2026 log.go:172] (0xc0006f45a0) (1) Data frame handling\nI0721 01:24:16.023030    2026 log.go:172] (0xc0006f45a0) (1) Data frame sent\nI0721 01:24:16.023048    2026 log.go:172] (0xc0007da160) (0xc0006f45a0) Stream removed, broadcasting: 1\nI0721 01:24:16.023062    2026 log.go:172] (0xc0007da160) Go away received\nI0721 01:24:16.023368    2026 log.go:172] (0xc0007da160) (0xc0006f45a0) Stream removed, broadcasting: 1\nI0721 01:24:16.023399    2026 
log.go:172] (0xc0007da160) (0xc0000eec80) Stream removed, broadcasting: 3\nI0721 01:24:16.023417    2026 log.go:172] (0xc0007da160) (0xc0006f4640) Stream removed, broadcasting: 5\n"
Jul 21 01:24:16.027: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 21 01:24:16.027: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 21 01:24:16.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 21 01:24:16.257: INFO: stderr: "I0721 01:24:16.176224    2049 log.go:172] (0xc00081e2c0) (0xc000718640) Create stream\nI0721 01:24:16.176306    2049 log.go:172] (0xc00081e2c0) (0xc000718640) Stream added, broadcasting: 1\nI0721 01:24:16.179091    2049 log.go:172] (0xc00081e2c0) Reply frame received for 1\nI0721 01:24:16.179134    2049 log.go:172] (0xc00081e2c0) (0xc000560c80) Create stream\nI0721 01:24:16.179151    2049 log.go:172] (0xc00081e2c0) (0xc000560c80) Stream added, broadcasting: 3\nI0721 01:24:16.180263    2049 log.go:172] (0xc00081e2c0) Reply frame received for 3\nI0721 01:24:16.180311    2049 log.go:172] (0xc00081e2c0) (0xc000560dc0) Create stream\nI0721 01:24:16.180326    2049 log.go:172] (0xc00081e2c0) (0xc000560dc0) Stream added, broadcasting: 5\nI0721 01:24:16.181488    2049 log.go:172] (0xc00081e2c0) Reply frame received for 5\nI0721 01:24:16.251211    2049 log.go:172] (0xc00081e2c0) Data frame received for 5\nI0721 01:24:16.251244    2049 log.go:172] (0xc000560dc0) (5) Data frame handling\nI0721 01:24:16.251256    2049 log.go:172] (0xc000560dc0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0721 01:24:16.251268    2049 log.go:172] (0xc00081e2c0) Data frame received for 5\nI0721 01:24:16.251366    2049 log.go:172] (0xc000560dc0) (5) Data frame handling\nI0721 01:24:16.251397    2049 log.go:172] (0xc00081e2c0) Data frame received for 3\nI0721 01:24:16.251411    2049 log.go:172] (0xc000560c80) (3) Data frame handling\nI0721 01:24:16.251426    2049 log.go:172] (0xc000560c80) (3) Data frame sent\nI0721 01:24:16.251438    2049 log.go:172] (0xc00081e2c0) Data frame received for 3\nI0721 01:24:16.251450    2049 log.go:172] (0xc000560c80) (3) Data frame handling\nI0721 01:24:16.253245    2049 log.go:172] (0xc00081e2c0) Data frame received for 1\nI0721 01:24:16.253274    2049 log.go:172] (0xc000718640) (1) Data frame handling\nI0721 01:24:16.253290    2049 log.go:172] (0xc000718640) (1) Data frame sent\nI0721 
01:24:16.253307    2049 log.go:172] (0xc00081e2c0) (0xc000718640) Stream removed, broadcasting: 1\nI0721 01:24:16.253333    2049 log.go:172] (0xc00081e2c0) Go away received\nI0721 01:24:16.253662    2049 log.go:172] (0xc00081e2c0) (0xc000718640) Stream removed, broadcasting: 1\nI0721 01:24:16.253694    2049 log.go:172] (0xc00081e2c0) (0xc000560c80) Stream removed, broadcasting: 3\nI0721 01:24:16.253718    2049 log.go:172] (0xc00081e2c0) (0xc000560dc0) Stream removed, broadcasting: 5\n"
Jul 21 01:24:16.257: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 21 01:24:16.257: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 21 01:24:16.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 21 01:24:16.445: INFO: stderr: "I0721 01:24:16.382143    2072 log.go:172] (0xc000138840) (0xc000671220) Create stream\nI0721 01:24:16.382192    2072 log.go:172] (0xc000138840) (0xc000671220) Stream added, broadcasting: 1\nI0721 01:24:16.384202    2072 log.go:172] (0xc000138840) Reply frame received for 1\nI0721 01:24:16.384261    2072 log.go:172] (0xc000138840) (0xc000798000) Create stream\nI0721 01:24:16.384277    2072 log.go:172] (0xc000138840) (0xc000798000) Stream added, broadcasting: 3\nI0721 01:24:16.385404    2072 log.go:172] (0xc000138840) Reply frame received for 3\nI0721 01:24:16.385440    2072 log.go:172] (0xc000138840) (0xc0007980a0) Create stream\nI0721 01:24:16.385450    2072 log.go:172] (0xc000138840) (0xc0007980a0) Stream added, broadcasting: 5\nI0721 01:24:16.386371    2072 log.go:172] (0xc000138840) Reply frame received for 5\nI0721 01:24:16.435851    2072 log.go:172] (0xc000138840) Data frame received for 3\nI0721 01:24:16.435890    2072 log.go:172] (0xc000798000) (3) Data frame handling\nI0721 01:24:16.435902    2072 log.go:172] (0xc000798000) (3) Data frame sent\nI0721 01:24:16.435924    2072 log.go:172] (0xc000138840) Data frame received for 5\nI0721 01:24:16.435932    2072 log.go:172] (0xc0007980a0) (5) Data frame handling\nI0721 01:24:16.435943    2072 log.go:172] (0xc0007980a0) (5) Data frame sent\nI0721 01:24:16.435955    2072 log.go:172] (0xc000138840) Data frame received for 5\nI0721 01:24:16.435965    2072 log.go:172] (0xc0007980a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0721 01:24:16.435975    2072 log.go:172] (0xc000138840) Data frame received for 3\nI0721 01:24:16.435983    2072 log.go:172] (0xc000798000) (3) Data frame handling\nI0721 01:24:16.437665    2072 log.go:172] (0xc000138840) Data frame received for 1\nI0721 01:24:16.437700    2072 log.go:172] (0xc000671220) (1) Data frame handling\nI0721 01:24:16.437721    2072 log.go:172] (0xc000671220) (1) Data frame sent\nI0721 
01:24:16.437741    2072 log.go:172] (0xc000138840) (0xc000671220) Stream removed, broadcasting: 1\nI0721 01:24:16.437782    2072 log.go:172] (0xc000138840) Go away received\nI0721 01:24:16.438053    2072 log.go:172] (0xc000138840) (0xc000671220) Stream removed, broadcasting: 1\nI0721 01:24:16.438077    2072 log.go:172] (0xc000138840) (0xc000798000) Stream removed, broadcasting: 3\nI0721 01:24:16.438098    2072 log.go:172] (0xc000138840) (0xc0007980a0) Stream removed, broadcasting: 5\n"
Jul 21 01:24:16.445: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 21 01:24:16.445: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 21 01:24:16.449: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul 21 01:24:26.454: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:24:26.454: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 01:24:26.454: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul 21 01:24:26.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:24:26.694: INFO: stderr: "I0721 01:24:26.596000    2094 log.go:172] (0xc000138840) (0xc00071e640) Create stream\nI0721 01:24:26.596048    2094 log.go:172] (0xc000138840) (0xc00071e640) Stream added, broadcasting: 1\nI0721 01:24:26.598367    2094 log.go:172] (0xc000138840) Reply frame received for 1\nI0721 01:24:26.598413    2094 log.go:172] (0xc000138840) (0xc0007acd20) Create stream\nI0721 01:24:26.598428    2094 log.go:172] (0xc000138840) (0xc0007acd20) Stream added, broadcasting: 3\nI0721 01:24:26.599444    2094 log.go:172] (0xc000138840) Reply frame received for 3\nI0721 01:24:26.599496    2094 log.go:172] (0xc000138840) (0xc00071e6e0) Create stream\nI0721 01:24:26.599510    2094 log.go:172] (0xc000138840) (0xc00071e6e0) Stream added, broadcasting: 5\nI0721 01:24:26.600446    2094 log.go:172] (0xc000138840) Reply frame received for 5\nI0721 01:24:26.686938    2094 log.go:172] (0xc000138840) Data frame received for 3\nI0721 01:24:26.686987    2094 log.go:172] (0xc0007acd20) (3) Data frame handling\nI0721 01:24:26.687000    2094 log.go:172] (0xc0007acd20) (3) Data frame sent\nI0721 01:24:26.687040    2094 log.go:172] (0xc000138840) Data frame received for 5\nI0721 01:24:26.687108    2094 log.go:172] (0xc00071e6e0) (5) Data frame handling\nI0721 01:24:26.687148    2094 log.go:172] (0xc000138840) Data frame received for 3\nI0721 01:24:26.687168    2094 log.go:172] (0xc0007acd20) (3) Data frame handling\nI0721 01:24:26.688627    2094 log.go:172] (0xc000138840) Data frame received for 1\nI0721 01:24:26.688651    2094 log.go:172] (0xc00071e640) (1) Data frame handling\nI0721 01:24:26.688662    2094 log.go:172] (0xc00071e640) (1) Data frame sent\nI0721 01:24:26.688684    2094 log.go:172] (0xc000138840) (0xc00071e640) Stream removed, broadcasting: 1\nI0721 01:24:26.688720    2094 log.go:172] (0xc000138840) Go away received\nI0721 01:24:26.689043    2094 log.go:172] (0xc000138840) (0xc00071e640) Stream removed, broadcasting: 1\nI0721 01:24:26.689067    2094 
log.go:172] (0xc000138840) (0xc0007acd20) Stream removed, broadcasting: 3\nI0721 01:24:26.689079    2094 log.go:172] (0xc000138840) (0xc00071e6e0) Stream removed, broadcasting: 5\n"
Jul 21 01:24:26.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:24:26.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 21 01:24:26.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:24:26.920: INFO: stderr: "I0721 01:24:26.821073    2115 log.go:172] (0xc000138790) (0xc0002a54a0) Create stream\nI0721 01:24:26.821128    2115 log.go:172] (0xc000138790) (0xc0002a54a0) Stream added, broadcasting: 1\nI0721 01:24:26.822884    2115 log.go:172] (0xc000138790) Reply frame received for 1\nI0721 01:24:26.822948    2115 log.go:172] (0xc000138790) (0xc0002a5540) Create stream\nI0721 01:24:26.822963    2115 log.go:172] (0xc000138790) (0xc0002a5540) Stream added, broadcasting: 3\nI0721 01:24:26.823666    2115 log.go:172] (0xc000138790) Reply frame received for 3\nI0721 01:24:26.823706    2115 log.go:172] (0xc000138790) (0xc000674000) Create stream\nI0721 01:24:26.823731    2115 log.go:172] (0xc000138790) (0xc000674000) Stream added, broadcasting: 5\nI0721 01:24:26.824520    2115 log.go:172] (0xc000138790) Reply frame received for 5\nI0721 01:24:26.912152    2115 log.go:172] (0xc000138790) Data frame received for 3\nI0721 01:24:26.912202    2115 log.go:172] (0xc0002a5540) (3) Data frame handling\nI0721 01:24:26.912237    2115 log.go:172] (0xc0002a5540) (3) Data frame sent\nI0721 01:24:26.912252    2115 log.go:172] (0xc000138790) Data frame received for 3\nI0721 01:24:26.912266    2115 log.go:172] (0xc0002a5540) (3) Data frame handling\nI0721 01:24:26.912367    2115 log.go:172] (0xc000138790) Data frame received for 5\nI0721 01:24:26.912419    2115 log.go:172] (0xc000674000) (5) Data frame handling\nI0721 01:24:26.914288    2115 log.go:172] (0xc000138790) Data frame received for 1\nI0721 01:24:26.914328    2115 log.go:172] (0xc0002a54a0) (1) Data frame handling\nI0721 01:24:26.914354    2115 log.go:172] (0xc0002a54a0) (1) Data frame sent\nI0721 01:24:26.914383    2115 log.go:172] (0xc000138790) (0xc0002a54a0) Stream removed, broadcasting: 1\nI0721 01:24:26.914416    2115 log.go:172] (0xc000138790) Go away received\nI0721 01:24:26.914664    2115 log.go:172] (0xc000138790) (0xc0002a54a0) Stream removed, broadcasting: 1\nI0721 01:24:26.914692    2115 
log.go:172] (0xc000138790) (0xc0002a5540) Stream removed, broadcasting: 3\nI0721 01:24:26.914704    2115 log.go:172] (0xc000138790) (0xc000674000) Stream removed, broadcasting: 5\n"
Jul 21 01:24:26.920: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:24:26.920: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 21 01:24:26.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-94m9g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 21 01:24:27.141: INFO: stderr: "I0721 01:24:27.048216    2137 log.go:172] (0xc000138630) (0xc00072c640) Create stream\nI0721 01:24:27.048278    2137 log.go:172] (0xc000138630) (0xc00072c640) Stream added, broadcasting: 1\nI0721 01:24:27.051054    2137 log.go:172] (0xc000138630) Reply frame received for 1\nI0721 01:24:27.051111    2137 log.go:172] (0xc000138630) (0xc000606f00) Create stream\nI0721 01:24:27.051133    2137 log.go:172] (0xc000138630) (0xc000606f00) Stream added, broadcasting: 3\nI0721 01:24:27.052420    2137 log.go:172] (0xc000138630) Reply frame received for 3\nI0721 01:24:27.052461    2137 log.go:172] (0xc000138630) (0xc000392000) Create stream\nI0721 01:24:27.052474    2137 log.go:172] (0xc000138630) (0xc000392000) Stream added, broadcasting: 5\nI0721 01:24:27.053724    2137 log.go:172] (0xc000138630) Reply frame received for 5\nI0721 01:24:27.133969    2137 log.go:172] (0xc000138630) Data frame received for 3\nI0721 01:24:27.134007    2137 log.go:172] (0xc000606f00) (3) Data frame handling\nI0721 01:24:27.134038    2137 log.go:172] (0xc000606f00) (3) Data frame sent\nI0721 01:24:27.134256    2137 log.go:172] (0xc000138630) Data frame received for 5\nI0721 01:24:27.134293    2137 log.go:172] (0xc000392000) (5) Data frame handling\nI0721 01:24:27.134347    2137 log.go:172] (0xc000138630) Data frame received for 3\nI0721 01:24:27.134384    2137 log.go:172] (0xc000606f00) (3) Data frame handling\nI0721 01:24:27.136206    2137 log.go:172] (0xc000138630) Data frame received for 1\nI0721 01:24:27.136230    2137 log.go:172] (0xc00072c640) (1) Data frame handling\nI0721 01:24:27.136244    2137 log.go:172] (0xc00072c640) (1) Data frame sent\nI0721 01:24:27.136270    2137 log.go:172] (0xc000138630) (0xc00072c640) Stream removed, broadcasting: 1\nI0721 01:24:27.136290    2137 log.go:172] (0xc000138630) Go away received\nI0721 01:24:27.136598    2137 log.go:172] (0xc000138630) (0xc00072c640) Stream removed, broadcasting: 1\nI0721 01:24:27.136625    2137 log.go:172] (0xc000138630) (0xc000606f00) Stream removed, broadcasting: 3\nI0721 01:24:27.136641    2137 log.go:172] (0xc000138630) (0xc000392000) Stream removed, broadcasting: 5\n"
Jul 21 01:24:27.141: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 21 01:24:27.141: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 21 01:24:27.141: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 01:24:27.145: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 21 01:24:37.153: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 21 01:24:37.153: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 21 01:24:37.153: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 21 01:24:37.172: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 21 01:24:37.172: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  }]
Jul 21 01:24:37.172: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:37.172: INFO: ss-2  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:37.172: INFO: 
Jul 21 01:24:37.172: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 21 01:24:38.182: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 21 01:24:38.182: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  }]
Jul 21 01:24:38.182: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:38.182: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:38.182: INFO: 
Jul 21 01:24:38.182: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 21 01:24:39.187: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 21 01:24:39.187: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:23:44 +0000 UTC  }]
Jul 21 01:24:39.187: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:39.187: INFO: ss-2  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:39.187: INFO: 
Jul 21 01:24:39.187: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 21 01:24:40.192: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul 21 01:24:40.193: INFO: ss-1  hunter-worker  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:40.193: INFO: ss-2  hunter-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:24:05 +0000 UTC  }]
Jul 21 01:24:40.193: INFO: 
Jul 21 01:24:40.193: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 21 01:24:41.196: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.967603282s
Jul 21 01:24:42.201: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.96375237s
Jul 21 01:24:43.205: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.959682859s
Jul 21 01:24:44.209: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.955064403s
Jul 21 01:24:45.214: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.950694741s
Jul 21 01:24:46.218: INFO: Verifying statefulset ss doesn't scale past 0 for another 946.401096ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-94m9g
Jul 21 01:24:47.222: INFO: Scaling statefulset ss to 0
Jul 21 01:24:47.232: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 21 01:24:47.235: INFO: Deleting all statefulset in ns e2e-tests-statefulset-94m9g
Jul 21 01:24:47.237: INFO: Scaling statefulset ss to 0
Jul 21 01:24:47.245: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 01:24:47.247: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:24:47.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-94m9g" for this suite.
Jul 21 01:24:53.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:24:53.377: INFO: namespace: e2e-tests-statefulset-94m9g, resource: bindings, ignored listing per whitelist
Jul 21 01:24:53.387: INFO: namespace e2e-tests-statefulset-94m9g deletion completed in 6.123180876s

• [SLOW TEST:68.767 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:24:53.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-faaab9c1-caf0-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:24:53.482: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-2wqg2" to be "success or failure"
Jul 21 01:24:53.486: INFO: Pod "pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.808122ms
Jul 21 01:24:55.494: INFO: Pod "pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011604306s
Jul 21 01:24:57.498: INFO: Pod "pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015716828s
STEP: Saw pod success
Jul 21 01:24:57.498: INFO: Pod "pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:24:57.501: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jul 21 01:24:57.712: INFO: Waiting for pod pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009 to disappear
Jul 21 01:24:57.716: INFO: Pod pod-projected-secrets-faabd466-caf0-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:24:57.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2wqg2" for this suite.
Jul 21 01:25:03.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:25:03.881: INFO: namespace: e2e-tests-projected-2wqg2, resource: bindings, ignored listing per whitelist
Jul 21 01:25:03.930: INFO: namespace e2e-tests-projected-2wqg2 deletion completed in 6.211217857s

• [SLOW TEST:10.543 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:25:03.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 21 01:25:04.060: INFO: Waiting up to 5m0s for pod "pod-00f7df12-caf1-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-rf4f2" to be "success or failure"
Jul 21 01:25:04.064: INFO: Pod "pod-00f7df12-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.816357ms
Jul 21 01:25:06.207: INFO: Pod "pod-00f7df12-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146830866s
Jul 21 01:25:08.211: INFO: Pod "pod-00f7df12-caf1-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151034486s
STEP: Saw pod success
Jul 21 01:25:08.211: INFO: Pod "pod-00f7df12-caf1-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:25:08.214: INFO: Trying to get logs from node hunter-worker pod pod-00f7df12-caf1-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:25:08.252: INFO: Waiting for pod pod-00f7df12-caf1-11ea-86e4-0242ac110009 to disappear
Jul 21 01:25:08.319: INFO: Pod pod-00f7df12-caf1-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:25:08.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rf4f2" for this suite.
Jul 21 01:25:16.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:25:16.372: INFO: namespace: e2e-tests-emptydir-rf4f2, resource: bindings, ignored listing per whitelist
Jul 21 01:25:16.401: INFO: namespace e2e-tests-emptydir-rf4f2 deletion completed in 8.077648415s

• [SLOW TEST:12.471 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:25:16.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul 21 01:25:16.522: INFO: Pod name pod-release: Found 0 pods out of 1
Jul 21 01:25:21.535: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:25:22.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-w75jv" for this suite.
Jul 21 01:25:30.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:25:30.704: INFO: namespace: e2e-tests-replication-controller-w75jv, resource: bindings, ignored listing per whitelist
Jul 21 01:25:30.727: INFO: namespace e2e-tests-replication-controller-w75jv deletion completed in 8.159231227s

• [SLOW TEST:14.325 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:25:30.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul 21 01:25:30.963: INFO: Waiting up to 5m0s for pod "var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009" in namespace "e2e-tests-var-expansion-qn7zx" to be "success or failure"
Jul 21 01:25:30.993: INFO: Pod "var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 30.043295ms
Jul 21 01:25:32.997: INFO: Pod "var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034257034s
Jul 21 01:25:35.001: INFO: Pod "var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038615533s
STEP: Saw pod success
Jul 21 01:25:35.002: INFO: Pod "var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:25:35.005: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 01:25:35.046: INFO: Waiting for pod var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009 to disappear
Jul 21 01:25:35.054: INFO: Pod var-expansion-10f956e6-caf1-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:25:35.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qn7zx" for this suite.
Jul 21 01:25:41.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:25:41.089: INFO: namespace: e2e-tests-var-expansion-qn7zx, resource: bindings, ignored listing per whitelist
Jul 21 01:25:41.157: INFO: namespace e2e-tests-var-expansion-qn7zx deletion completed in 6.100252325s

• [SLOW TEST:10.431 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:25:41.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 01:25:41.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-ck964" to be "success or failure"
Jul 21 01:25:41.284: INFO: Pod "downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 12.523238ms
Jul 21 01:25:43.287: INFO: Pod "downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015947549s
Jul 21 01:25:45.291: INFO: Pod "downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019780146s
STEP: Saw pod success
Jul 21 01:25:45.291: INFO: Pod "downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:25:45.294: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 01:25:45.351: INFO: Waiting for pod downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009 to disappear
Jul 21 01:25:45.361: INFO: Pod downwardapi-volume-1725a0b9-caf1-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:25:45.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ck964" for this suite.
Jul 21 01:25:51.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:25:51.447: INFO: namespace: e2e-tests-projected-ck964, resource: bindings, ignored listing per whitelist
Jul 21 01:25:51.456: INFO: namespace e2e-tests-projected-ck964 deletion completed in 6.091282014s

• [SLOW TEST:10.298 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:25:51.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:25:51.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-b9qk6" for this suite.
Jul 21 01:26:13.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:26:13.782: INFO: namespace: e2e-tests-pods-b9qk6, resource: bindings, ignored listing per whitelist
Jul 21 01:26:13.824: INFO: namespace e2e-tests-pods-b9qk6 deletion completed in 22.127729642s

• [SLOW TEST:22.368 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:26:13.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 21 01:26:13.932: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 21 01:26:13.953: INFO: Waiting for terminating namespaces to be deleted...
Jul 21 01:26:13.973: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 21 01:26:13.979: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 21 01:26:13.979: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 21 01:26:13.979: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 21 01:26:13.979: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 21 01:26:13.979: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 21 01:26:13.984: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 21 01:26:13.984: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 21 01:26:13.984: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 21 01:26:13.984: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1623a0250c512347], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:26:15.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-q47z6" for this suite.
Jul 21 01:26:21.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:26:21.064: INFO: namespace: e2e-tests-sched-pred-q47z6, resource: bindings, ignored listing per whitelist
Jul 21 01:26:21.131: INFO: namespace e2e-tests-sched-pred-q47z6 deletion completed in 6.123869619s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.307 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:26:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-l5jf
STEP: Creating a pod to test atomic-volume-subpath
Jul 21 01:26:21.301: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l5jf" in namespace "e2e-tests-subpath-ctpf4" to be "success or failure"
Jul 21 01:26:21.305: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.893391ms
Jul 21 01:26:23.351: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050058647s
Jul 21 01:26:25.355: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053725424s
Jul 21 01:26:27.359: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05772991s
Jul 21 01:26:29.363: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=true. Elapsed: 8.061688964s
Jul 21 01:26:31.367: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 10.065627777s
Jul 21 01:26:33.370: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 12.069283537s
Jul 21 01:26:35.399: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 14.097867968s
Jul 21 01:26:37.403: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 16.101618817s
Jul 21 01:26:39.407: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 18.106065984s
Jul 21 01:26:41.411: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 20.109622382s
Jul 21 01:26:43.414: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 22.1133958s
Jul 21 01:26:45.419: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 24.117576307s
Jul 21 01:26:47.423: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Running", Reason="", readiness=false. Elapsed: 26.121541353s
Jul 21 01:26:49.427: INFO: Pod "pod-subpath-test-configmap-l5jf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.125511084s
STEP: Saw pod success
Jul 21 01:26:49.427: INFO: Pod "pod-subpath-test-configmap-l5jf" satisfied condition "success or failure"
Jul 21 01:26:49.430: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-l5jf container test-container-subpath-configmap-l5jf: 
STEP: delete the pod
Jul 21 01:26:49.655: INFO: Waiting for pod pod-subpath-test-configmap-l5jf to disappear
Jul 21 01:26:49.704: INFO: Pod pod-subpath-test-configmap-l5jf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-l5jf
Jul 21 01:26:49.704: INFO: Deleting pod "pod-subpath-test-configmap-l5jf" in namespace "e2e-tests-subpath-ctpf4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:26:49.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ctpf4" for this suite.
Jul 21 01:26:55.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:26:55.906: INFO: namespace: e2e-tests-subpath-ctpf4, resource: bindings, ignored listing per whitelist
Jul 21 01:26:55.941: INFO: namespace e2e-tests-subpath-ctpf4 deletion completed in 6.231843325s

• [SLOW TEST:34.810 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:26:55.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 21 01:26:56.038: INFO: Waiting up to 5m0s for pod "pod-43b74446-caf1-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-7cf8j" to be "success or failure"
Jul 21 01:26:56.064: INFO: Pod "pod-43b74446-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.296663ms
Jul 21 01:26:58.067: INFO: Pod "pod-43b74446-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028902695s
Jul 21 01:27:00.071: INFO: Pod "pod-43b74446-caf1-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.033086452s
Jul 21 01:27:02.076: INFO: Pod "pod-43b74446-caf1-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037386132s
STEP: Saw pod success
Jul 21 01:27:02.076: INFO: Pod "pod-43b74446-caf1-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:27:02.079: INFO: Trying to get logs from node hunter-worker2 pod pod-43b74446-caf1-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:27:02.098: INFO: Waiting for pod pod-43b74446-caf1-11ea-86e4-0242ac110009 to disappear
Jul 21 01:27:02.159: INFO: Pod pod-43b74446-caf1-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:27:02.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7cf8j" for this suite.
Jul 21 01:27:08.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:27:08.536: INFO: namespace: e2e-tests-emptydir-7cf8j, resource: bindings, ignored listing per whitelist
Jul 21 01:27:08.601: INFO: namespace e2e-tests-emptydir-7cf8j deletion completed in 6.437842756s

• [SLOW TEST:12.660 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:27:08.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-4b5e6065-caf1-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:27:08.910: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-6xw45" to be "success or failure"
Jul 21 01:27:08.926: INFO: Pod "pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.200564ms
Jul 21 01:27:10.929: INFO: Pod "pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018805186s
Jul 21 01:27:12.933: INFO: Pod "pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022894043s
STEP: Saw pod success
Jul 21 01:27:12.933: INFO: Pod "pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:27:12.936: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jul 21 01:27:12.986: INFO: Waiting for pod pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009 to disappear
Jul 21 01:27:13.124: INFO: Pod pod-projected-secrets-4b5ef0f9-caf1-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:27:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6xw45" for this suite.
Jul 21 01:27:19.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:27:19.256: INFO: namespace: e2e-tests-projected-6xw45, resource: bindings, ignored listing per whitelist
Jul 21 01:27:19.301: INFO: namespace e2e-tests-projected-6xw45 deletion completed in 6.173561804s

• [SLOW TEST:10.699 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:27:19.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 21 01:27:19.476: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:27:27.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-kb2dd" for this suite.
Jul 21 01:27:33.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:27:33.408: INFO: namespace: e2e-tests-init-container-kb2dd, resource: bindings, ignored listing per whitelist
Jul 21 01:27:33.413: INFO: namespace e2e-tests-init-container-kb2dd deletion completed in 6.075237498s

• [SLOW TEST:14.111 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:27:33.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:27:33.560: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul 21 01:27:33.566: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ssvl7/daemonsets","resourceVersion":"1923036"},"items":null}

Jul 21 01:27:33.568: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ssvl7/pods","resourceVersion":"1923036"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:27:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ssvl7" for this suite.
Jul 21 01:27:39.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:27:39.662: INFO: namespace: e2e-tests-daemonsets-ssvl7, resource: bindings, ignored listing per whitelist
Jul 21 01:27:39.675: INFO: namespace e2e-tests-daemonsets-ssvl7 deletion completed in 6.098051428s

S [SKIPPING] [6.262 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jul 21 01:27:33.560: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:27:39.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5dd3d965-caf1-11ea-86e4-0242ac110009
STEP: Creating secret with name s-test-opt-upd-5dd3d9d2-caf1-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5dd3d965-caf1-11ea-86e4-0242ac110009
STEP: Updating secret s-test-opt-upd-5dd3d9d2-caf1-11ea-86e4-0242ac110009
STEP: Creating secret with name s-test-opt-create-5dd3d9e8-caf1-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:29:02.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wq97k" for this suite.
Jul 21 01:29:26.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:29:26.930: INFO: namespace: e2e-tests-projected-wq97k, resource: bindings, ignored listing per whitelist
Jul 21 01:29:26.983: INFO: namespace e2e-tests-projected-wq97k deletion completed in 24.107585006s

• [SLOW TEST:107.307 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:29:26.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hlc8f
Jul 21 01:29:31.148: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hlc8f
STEP: checking the pod's current state and verifying that restartCount is present
Jul 21 01:29:31.150: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:33:33.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hlc8f" for this suite.
Jul 21 01:33:41.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:33:41.367: INFO: namespace: e2e-tests-container-probe-hlc8f, resource: bindings, ignored listing per whitelist
Jul 21 01:33:41.420: INFO: namespace e2e-tests-container-probe-hlc8f deletion completed in 8.20596281s

• [SLOW TEST:254.437 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:33:41.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-356811e5-caf2-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:33:41.592: INFO: Waiting up to 5m0s for pod "pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-bqxs9" to be "success or failure"
Jul 21 01:33:41.602: INFO: Pod "pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021084ms
Jul 21 01:33:43.728: INFO: Pod "pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135732791s
Jul 21 01:33:45.733: INFO: Pod "pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140520463s
STEP: Saw pod success
Jul 21 01:33:45.733: INFO: Pod "pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:33:45.736: INFO: Trying to get logs from node hunter-worker pod pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jul 21 01:33:45.766: INFO: Waiting for pod pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009 to disappear
Jul 21 01:33:45.794: INFO: Pod pod-secrets-356f6313-caf2-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:33:45.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bqxs9" for this suite.
Jul 21 01:33:51.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:33:52.010: INFO: namespace: e2e-tests-secrets-bqxs9, resource: bindings, ignored listing per whitelist
Jul 21 01:33:52.121: INFO: namespace e2e-tests-secrets-bqxs9 deletion completed in 6.323292355s
STEP: Destroying namespace "e2e-tests-secret-namespace-txdkq" for this suite.
Jul 21 01:33:58.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:33:58.206: INFO: namespace: e2e-tests-secret-namespace-txdkq, resource: bindings, ignored listing per whitelist
Jul 21 01:33:58.244: INFO: namespace e2e-tests-secret-namespace-txdkq deletion completed in 6.122927136s

• [SLOW TEST:16.823 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:33:58.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 01:33:58.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-w9ns7" to be "success or failure"
Jul 21 01:33:58.388: INFO: Pod "downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.916393ms
Jul 21 01:34:00.550: INFO: Pod "downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184228017s
Jul 21 01:34:02.554: INFO: Pod "downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188513808s
STEP: Saw pod success
Jul 21 01:34:02.554: INFO: Pod "downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:34:02.558: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 01:34:02.607: INFO: Waiting for pod downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009 to disappear
Jul 21 01:34:02.621: INFO: Pod downwardapi-volume-3f72df95-caf2-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:34:02.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w9ns7" for this suite.
Jul 21 01:34:08.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:34:08.747: INFO: namespace: e2e-tests-projected-w9ns7, resource: bindings, ignored listing per whitelist
Jul 21 01:34:08.759: INFO: namespace e2e-tests-projected-w9ns7 deletion completed in 6.135377395s

• [SLOW TEST:10.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:34:08.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:34:34.958: INFO: Container started at 2020-07-21 01:34:11 +0000 UTC, pod became ready at 2020-07-21 01:34:34 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:34:34.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bjhwq" for this suite.
Jul 21 01:34:57.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:34:57.100: INFO: namespace: e2e-tests-container-probe-bjhwq, resource: bindings, ignored listing per whitelist
Jul 21 01:34:57.136: INFO: namespace e2e-tests-container-probe-bjhwq deletion completed in 22.174023223s

• [SLOW TEST:48.377 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:34:57.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jul 21 01:34:57.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vs7vp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 21 01:35:05.262: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0721 01:35:05.120946    2160 log.go:172] (0xc0009dc0b0) (0xc0006ae320) Create stream\nI0721 01:35:05.120989    2160 log.go:172] (0xc0009dc0b0) (0xc0006ae320) Stream added, broadcasting: 1\nI0721 01:35:05.124949    2160 log.go:172] (0xc0009dc0b0) Reply frame received for 1\nI0721 01:35:05.125049    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6000) Create stream\nI0721 01:35:05.125092    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6000) Stream added, broadcasting: 3\nI0721 01:35:05.126177    2160 log.go:172] (0xc0009dc0b0) Reply frame received for 3\nI0721 01:35:05.126212    2160 log.go:172] (0xc0009dc0b0) (0xc000ad60a0) Create stream\nI0721 01:35:05.126226    2160 log.go:172] (0xc0009dc0b0) (0xc000ad60a0) Stream added, broadcasting: 5\nI0721 01:35:05.127202    2160 log.go:172] (0xc0009dc0b0) Reply frame received for 5\nI0721 01:35:05.127234    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6140) Create stream\nI0721 01:35:05.127244    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6140) Stream added, broadcasting: 7\nI0721 01:35:05.128111    2160 log.go:172] (0xc0009dc0b0) Reply frame received for 7\nI0721 01:35:05.129053    2160 log.go:172] (0xc000ad6000) (3) Writing data frame\nI0721 01:35:05.129207    2160 log.go:172] (0xc000ad6000) (3) Writing data frame\nI0721 01:35:05.130005    2160 log.go:172] (0xc0009dc0b0) Data frame received for 5\nI0721 01:35:05.130032    2160 log.go:172] (0xc000ad60a0) (5) Data frame handling\nI0721 01:35:05.130057    2160 log.go:172] (0xc000ad60a0) (5) Data frame sent\nI0721 01:35:05.130696    2160 log.go:172] (0xc0009dc0b0) Data frame received for 5\nI0721 01:35:05.130712    2160 log.go:172] (0xc000ad60a0) (5) Data frame handling\nI0721 01:35:05.130726    2160 log.go:172] (0xc000ad60a0) (5) Data frame 
sent\nI0721 01:35:05.175111    2160 log.go:172] (0xc0009dc0b0) Data frame received for 7\nI0721 01:35:05.175154    2160 log.go:172] (0xc0009dc0b0) Data frame received for 5\nI0721 01:35:05.175196    2160 log.go:172] (0xc000ad60a0) (5) Data frame handling\nI0721 01:35:05.175239    2160 log.go:172] (0xc000ad6140) (7) Data frame handling\nI0721 01:35:05.175715    2160 log.go:172] (0xc0009dc0b0) Data frame received for 1\nI0721 01:35:05.175747    2160 log.go:172] (0xc0006ae320) (1) Data frame handling\nI0721 01:35:05.175762    2160 log.go:172] (0xc0006ae320) (1) Data frame sent\nI0721 01:35:05.175784    2160 log.go:172] (0xc0009dc0b0) (0xc0006ae320) Stream removed, broadcasting: 1\nI0721 01:35:05.175810    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6000) Stream removed, broadcasting: 3\nI0721 01:35:05.175931    2160 log.go:172] (0xc0009dc0b0) (0xc0006ae320) Stream removed, broadcasting: 1\nI0721 01:35:05.175965    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6000) Stream removed, broadcasting: 3\nI0721 01:35:05.175978    2160 log.go:172] (0xc0009dc0b0) (0xc000ad60a0) Stream removed, broadcasting: 5\nI0721 01:35:05.175991    2160 log.go:172] (0xc0009dc0b0) (0xc000ad6140) Stream removed, broadcasting: 7\nI0721 01:35:05.176019    2160 log.go:172] (0xc0009dc0b0) Go away received\n"
Jul 21 01:35:05.263: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:35:07.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vs7vp" for this suite.
Jul 21 01:35:21.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:35:21.319: INFO: namespace: e2e-tests-kubectl-vs7vp, resource: bindings, ignored listing per whitelist
Jul 21 01:35:21.445: INFO: namespace e2e-tests-kubectl-vs7vp deletion completed in 14.175095226s

• [SLOW TEST:24.309 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
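Editor's note: as the stderr above warns, `kubectl run --generator=job/v1` is deprecated. A rough modern equivalent of this test's command, without the `--rm`/`--attach` plumbing, is sketched below (newer kubectl only; the job name and image are taken from the log, but `kubectl create job` did not exist in the v1.13 client used here):

```
# Sketch only: create the job explicitly, then clean it up by hand,
# instead of relying on the deprecated --generator=job/v1 --rm form.
# Note: this does not attach stdin the way the test's command does.
kubectl create job e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  -- sh -c "echo 'stdin closed'"
kubectl delete job e2e-test-rm-busybox-job
```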
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:35:21.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 21 01:35:21.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4wbvz'
Jul 21 01:35:22.038: INFO: stderr: ""
Jul 21 01:35:22.038: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 21 01:35:23.042: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:23.042: INFO: Found 0 / 1
Jul 21 01:35:24.083: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:24.083: INFO: Found 0 / 1
Jul 21 01:35:25.043: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:25.043: INFO: Found 0 / 1
Jul 21 01:35:26.042: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:26.043: INFO: Found 1 / 1
Jul 21 01:35:26.043: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 21 01:35:26.046: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:26.046: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 21 01:35:26.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jm2w8 --namespace=e2e-tests-kubectl-4wbvz -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 21 01:35:26.161: INFO: stderr: ""
Jul 21 01:35:26.161: INFO: stdout: "pod/redis-master-jm2w8 patched\n"
STEP: checking annotations
Jul 21 01:35:26.164: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 01:35:26.164: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:35:26.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4wbvz" for this suite.
Jul 21 01:35:48.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:35:48.291: INFO: namespace: e2e-tests-kubectl-4wbvz, resource: bindings, ignored listing per whitelist
Jul 21 01:35:48.294: INFO: namespace e2e-tests-kubectl-4wbvz deletion completed in 22.126650675s

• [SLOW TEST:26.848 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
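Editor's note: the patch step in the test above boils down to a single strategic-merge patch on pod metadata. Reproduced as a standalone command (pod name and namespace copied from the log; the suite generates random suffixes, so yours will differ):

```
# Add the annotation x=y to the pod, exactly as the test does.
kubectl patch pod redis-master-jm2w8 \
  --namespace=e2e-tests-kubectl-4wbvz \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

# Verify the annotation landed:
kubectl get pod redis-master-jm2w8 \
  --namespace=e2e-tests-kubectl-4wbvz \
  -o jsonpath='{.metadata.annotations.x}'
```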
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:35:48.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 01:35:48.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-4pnqr" to be "success or failure"
Jul 21 01:35:48.444: INFO: Pod "downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088051ms
Jul 21 01:35:50.448: INFO: Pod "downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008214398s
Jul 21 01:35:52.452: INFO: Pod "downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012474907s
STEP: Saw pod success
Jul 21 01:35:52.452: INFO: Pod "downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:35:52.456: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 01:35:52.481: INFO: Waiting for pod downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009 to disappear
Jul 21 01:35:52.528: INFO: Pod downwardapi-volume-810e4d10-caf2-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:35:52.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4pnqr" for this suite.
Jul 21 01:35:58.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:35:58.578: INFO: namespace: e2e-tests-downward-api-4pnqr, resource: bindings, ignored listing per whitelist
Jul 21 01:35:58.639: INFO: namespace e2e-tests-downward-api-4pnqr deletion completed in 6.107629467s

• [SLOW TEST:10.345 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
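Editor's note: the Downward API volume test above creates a pod whose memory request is projected into a file. A minimal sketch of such a manifest, assuming a busybox image and hypothetical names (the test uses its own generated names and a dedicated test image):

```yaml
# Illustrative only: expose the container's memory request as a file
# via a downwardAPI volume, then print it and exit (Succeeded phase).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```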
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:35:58.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0721 01:35:59.818220       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 01:35:59.818: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:35:59.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wfvcm" for this suite.
Jul 21 01:36:05.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:36:05.959: INFO: namespace: e2e-tests-gc-wfvcm, resource: bindings, ignored listing per whitelist
Jul 21 01:36:05.972: INFO: namespace e2e-tests-gc-wfvcm deletion completed in 6.152094628s

• [SLOW TEST:7.333 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
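Editor's note: the garbage-collector test above deletes a Deployment without orphaning and waits for the ReplicaSet and Pods to be collected (the `expected 0 rs, got 1 rs` STEP is an intermediate poll, not a failure). The same cascading behavior from the CLI, sketched with an illustrative Deployment name (flag syntax varies by kubectl version; older clients use `--cascade=true/false` rather than the named policies shown here):

```
# Foreground cascade: block until dependents (ReplicaSet, Pods) are gone.
kubectl delete deployment example-deployment --cascade=foreground

# Orphan policy: delete only the Deployment; the GC leaves the
# ReplicaSet and its Pods running.
kubectl delete deployment example-deployment --cascade=orphan
```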
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:36:05.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 21 01:36:10.635: INFO: Successfully updated pod "pod-update-8b9563e8-caf2-11ea-86e4-0242ac110009"
STEP: verifying the updated pod is in kubernetes
Jul 21 01:36:10.664: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:36:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vmg8n" for this suite.
Jul 21 01:36:32.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:36:32.703: INFO: namespace: e2e-tests-pods-vmg8n, resource: bindings, ignored listing per whitelist
Jul 21 01:36:32.768: INFO: namespace e2e-tests-pods-vmg8n deletion completed in 22.099942291s

• [SLOW TEST:26.795 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
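Editor's note: the "updating the pod" step above mutates pod metadata in place. Only a few pod fields are mutable after creation (labels, annotations, image, and little else); an equivalent CLI sketch using the pod name and namespace from the log, with a hypothetical label key:

```
# Strategic-merge patch on mutable metadata; "updated" is an
# illustrative label key, not the one the test framework sets.
kubectl patch pod pod-update-8b9563e8-caf2-11ea-86e4-0242ac110009 \
  --namespace=e2e-tests-pods-vmg8n \
  -p '{"metadata":{"labels":{"updated":"true"}}}'
```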
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:36:32.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-n8fjv/configmap-test-9b963cce-caf2-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 01:36:32.979: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-n8fjv" to be "success or failure"
Jul 21 01:36:32.990: INFO: Pod "pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.443159ms
Jul 21 01:36:34.994: INFO: Pod "pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014703329s
Jul 21 01:36:36.998: INFO: Pod "pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018655233s
STEP: Saw pod success
Jul 21 01:36:36.998: INFO: Pod "pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:36:37.001: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009 container env-test: 
STEP: delete the pod
Jul 21 01:36:37.055: INFO: Waiting for pod pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009 to disappear
Jul 21 01:36:37.069: INFO: Pod pod-configmaps-9b96cc67-caf2-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:36:37.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n8fjv" for this suite.
Jul 21 01:36:43.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:36:43.094: INFO: namespace: e2e-tests-configmap-n8fjv, resource: bindings, ignored listing per whitelist
Jul 21 01:36:43.162: INFO: namespace e2e-tests-configmap-n8fjv deletion completed in 6.088402056s

• [SLOW TEST:10.394 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:36:43.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul 21 01:36:43.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:43.538: INFO: stderr: ""
Jul 21 01:36:43.538: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 21 01:36:43.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:43.694: INFO: stderr: ""
Jul 21 01:36:43.695: INFO: stdout: "update-demo-nautilus-c7tt5 update-demo-nautilus-hmmlt "
Jul 21 01:36:43.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c7tt5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:43.795: INFO: stderr: ""
Jul 21 01:36:43.795: INFO: stdout: ""
Jul 21 01:36:43.795: INFO: update-demo-nautilus-c7tt5 is created but not running
Jul 21 01:36:48.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:48.906: INFO: stderr: ""
Jul 21 01:36:48.906: INFO: stdout: "update-demo-nautilus-c7tt5 update-demo-nautilus-hmmlt "
Jul 21 01:36:48.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c7tt5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:49.003: INFO: stderr: ""
Jul 21 01:36:49.003: INFO: stdout: "true"
Jul 21 01:36:49.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c7tt5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:49.105: INFO: stderr: ""
Jul 21 01:36:49.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 21 01:36:49.105: INFO: validating pod update-demo-nautilus-c7tt5
Jul 21 01:36:49.110: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 21 01:36:49.110: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 21 01:36:49.110: INFO: update-demo-nautilus-c7tt5 is verified up and running
Jul 21 01:36:49.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmmlt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:49.205: INFO: stderr: ""
Jul 21 01:36:49.205: INFO: stdout: "true"
Jul 21 01:36:49.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmmlt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:36:49.310: INFO: stderr: ""
Jul 21 01:36:49.310: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 21 01:36:49.310: INFO: validating pod update-demo-nautilus-hmmlt
Jul 21 01:36:49.315: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 21 01:36:49.315: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 21 01:36:49.315: INFO: update-demo-nautilus-hmmlt is verified up and running
STEP: rolling-update to new replication controller
Jul 21 01:36:49.318: INFO: scanned /root for discovery docs: 
Jul 21 01:36:49.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:11.947: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 21 01:37:11.947: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 21 01:37:11.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:12.035: INFO: stderr: ""
Jul 21 01:37:12.035: INFO: stdout: "update-demo-kitten-l7cfj update-demo-kitten-ngcht "
Jul 21 01:37:12.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l7cfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:12.137: INFO: stderr: ""
Jul 21 01:37:12.138: INFO: stdout: "true"
Jul 21 01:37:12.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l7cfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:12.239: INFO: stderr: ""
Jul 21 01:37:12.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 21 01:37:12.239: INFO: validating pod update-demo-kitten-l7cfj
Jul 21 01:37:12.243: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 21 01:37:12.243: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 21 01:37:12.243: INFO: update-demo-kitten-l7cfj is verified up and running
Jul 21 01:37:12.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ngcht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:12.354: INFO: stderr: ""
Jul 21 01:37:12.354: INFO: stdout: "true"
Jul 21 01:37:12.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ngcht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d5sv4'
Jul 21 01:37:12.462: INFO: stderr: ""
Jul 21 01:37:12.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 21 01:37:12.462: INFO: validating pod update-demo-kitten-ngcht
Jul 21 01:37:12.466: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 21 01:37:12.466: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 21 01:37:12.466: INFO: update-demo-kitten-ngcht is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:37:12.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d5sv4" for this suite.
Jul 21 01:37:36.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:37:36.547: INFO: namespace: e2e-tests-kubectl-d5sv4, resource: bindings, ignored listing per whitelist
Jul 21 01:37:36.555: INFO: namespace e2e-tests-kubectl-d5sv4 deletion completed in 24.085300766s

• [SLOW TEST:53.393 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
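Editor's note: as the stderr above warns, `kubectl rolling-update` is deprecated and only works with replication controllers. The modern equivalent of the nautilus-to-kitten rollout performed by this test uses a Deployment and `kubectl rollout` (sketch only; the Deployment and container names are illustrative, since the test actually uses RCs):

```
# With a Deployment named update-demo, the same image swap becomes:
kubectl set image deployment/update-demo \
  update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0

# Watch the rolling update converge, analogous to the scaling
# up/down log lines emitted by rolling-update above.
kubectl rollout status deployment/update-demo
```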
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:37:36.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 21 01:37:36.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:36.932: INFO: stderr: ""
Jul 21 01:37:36.933: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 21 01:37:36.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:37.018: INFO: stderr: ""
Jul 21 01:37:37.018: INFO: stdout: "update-demo-nautilus-rwxqm update-demo-nautilus-vgwrt "
Jul 21 01:37:37.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rwxqm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:37.204: INFO: stderr: ""
Jul 21 01:37:37.204: INFO: stdout: ""
Jul 21 01:37:37.204: INFO: update-demo-nautilus-rwxqm is created but not running
Jul 21 01:37:42.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.306: INFO: stderr: ""
Jul 21 01:37:42.306: INFO: stdout: "update-demo-nautilus-rwxqm update-demo-nautilus-vgwrt "
Jul 21 01:37:42.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rwxqm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.401: INFO: stderr: ""
Jul 21 01:37:42.401: INFO: stdout: "true"
Jul 21 01:37:42.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rwxqm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.492: INFO: stderr: ""
Jul 21 01:37:42.492: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 21 01:37:42.492: INFO: validating pod update-demo-nautilus-rwxqm
Jul 21 01:37:42.496: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 21 01:37:42.496: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 21 01:37:42.496: INFO: update-demo-nautilus-rwxqm is verified up and running
Jul 21 01:37:42.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgwrt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.593: INFO: stderr: ""
Jul 21 01:37:42.593: INFO: stdout: "true"
Jul 21 01:37:42.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgwrt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.688: INFO: stderr: ""
Jul 21 01:37:42.688: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 21 01:37:42.689: INFO: validating pod update-demo-nautilus-vgwrt
Jul 21 01:37:42.692: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 21 01:37:42.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 21 01:37:42.692: INFO: update-demo-nautilus-vgwrt is verified up and running
STEP: using delete to clean up resources
Jul 21 01:37:42.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.803: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 01:37:42.803: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 21 01:37:42.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7vrjl'
Jul 21 01:37:42.905: INFO: stderr: "No resources found.\n"
Jul 21 01:37:42.905: INFO: stdout: ""
Jul 21 01:37:42.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7vrjl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 21 01:37:43.005: INFO: stderr: ""
Jul 21 01:37:43.005: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:37:43.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7vrjl" for this suite.
Jul 21 01:38:05.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:38:05.054: INFO: namespace: e2e-tests-kubectl-7vrjl, resource: bindings, ignored listing per whitelist
Jul 21 01:38:05.117: INFO: namespace e2e-tests-kubectl-7vrjl deletion completed in 22.108215554s

• [SLOW TEST:28.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
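The repeated `kubectl get pods -o template` invocations above evaluate a "container running" predicate: true only if a containerStatus named `update-demo` exists and carries a `running` state. A minimal Python equivalent of that predicate, applied to `kubectl get pod -o json` output, might look like this (the sample object is abridged and illustrative):

```python
import json

def container_running(pod: dict, name: str) -> bool:
    """Mirror of the e2e go-template check: true iff a containerStatus
    named `name` exists and has a `running` entry under .state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

# Shaped like (a small slice of) `kubectl get pod -o json` output.
sample = json.loads("""
{"status": {"containerStatuses": [
  {"name": "update-demo",
   "state": {"running": {"startedAt": "2020-07-21T01:37:00Z"}}}
]}}
""")
print(container_running(sample, "update-demo"))  # True
```

The test polls this predicate for each pod in the replication controller, then does a second template query for `.spec.containers[].image` to confirm the expected image.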
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:38:05.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-dhsg6
Jul 21 01:38:09.315: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-dhsg6
STEP: checking the pod's current state and verifying that restartCount is present
Jul 21 01:38:09.318: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:42:09.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dhsg6" for this suite.
Jul 21 01:42:15.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:42:16.036: INFO: namespace: e2e-tests-container-probe-dhsg6, resource: bindings, ignored listing per whitelist
Jul 21 01:42:16.061: INFO: namespace e2e-tests-container-probe-dhsg6 deletion completed in 6.095387222s

• [SLOW TEST:250.943 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
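The probe test above runs a pod whose liveness probe execs `cat /tmp/health` and asserts that `restartCount` stays at 0 over a ~4 minute window. A sketch of such a manifest, built as a Python dict, could be (the image name, args, and timings here are illustrative assumptions, not taken from the log):

```python
import json

# Hypothetical pod manifest for an exec liveness probe that always succeeds:
# the container creates /tmp/health up front and never removes it, so the
# kubelet's periodic `cat /tmp/health` keeps passing and no restart occurs.
liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-exec"},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "busybox",  # illustrative image choice
            "args": ["/bin/sh", "-c", "touch /tmp/health; sleep 600"],
            "livenessProbe": {
                "exec": {"command": ["cat", "/tmp/health"]},
                "initialDelaySeconds": 5,
                "periodSeconds": 5,
            },
        }]
    },
}
print(json.dumps(liveness_pod["spec"]["containers"][0]["livenessProbe"]))
```

The inverse test (probe that starts failing) checks that `restartCount` increments instead.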
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:42:16.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:42:22.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-n7smc" for this suite.
Jul 21 01:42:28.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:42:28.480: INFO: namespace: e2e-tests-namespaces-n7smc, resource: bindings, ignored listing per whitelist
Jul 21 01:42:28.531: INFO: namespace e2e-tests-namespaces-n7smc deletion completed in 6.084701979s
STEP: Destroying namespace "e2e-tests-nsdeletetest-5pz6w" for this suite.
Jul 21 01:42:28.534: INFO: Namespace e2e-tests-nsdeletetest-5pz6w was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-k5crk" for this suite.
Jul 21 01:42:34.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:42:34.626: INFO: namespace: e2e-tests-nsdeletetest-k5crk, resource: bindings, ignored listing per whitelist
Jul 21 01:42:34.636: INFO: namespace e2e-tests-nsdeletetest-k5crk deletion completed in 6.102101335s

• [SLOW TEST:18.575 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
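Most of the waiting in this log ("Waiting up to 3m0s...", "waiting for the namespace to be removed") follows the same poll-until-deadline pattern the e2e framework uses. A generic sketch of that loop in Python, under the assumption of a simple boolean condition callback:

```python
import time

def wait_for(cond, timeout=180.0, interval=1.0) -> bool:
    """Poll `cond` until it returns True or `timeout` seconds elapse,
    mirroring the framework's 'waiting up to 3m0s' style loops."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if cond():
            return True
        time.sleep(interval)
    return False

# Usage (hypothetical predicate): wait for a namespace to disappear,
# then recreate it and assert no Service objects survived.
# wait_for(lambda: namespace_gone("e2e-tests-namespaces-n7smc"), timeout=180)
```

The namespace test above is exactly this shape: create namespace, create a Service in it, delete the namespace, wait for removal, recreate, and verify the Service did not come back.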
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:42:34.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-734c5117-caf3-11ea-86e4-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-734c5175-caf3-11ea-86e4-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-734c5117-caf3-11ea-86e4-0242ac110009
STEP: Updating configmap cm-test-opt-upd-734c5175-caf3-11ea-86e4-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-734c519a-caf3-11ea-86e4-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:44:07.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvxbq" for this suite.
Jul 21 01:44:31.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:44:31.821: INFO: namespace: e2e-tests-projected-kvxbq, resource: bindings, ignored listing per whitelist
Jul 21 01:44:31.900: INFO: namespace e2e-tests-projected-kvxbq deletion completed in 24.13550488s

• [SLOW TEST:117.263 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
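The projected-ConfigMap test creates two ConfigMaps marked `optional`, mounts them through one projected volume, then deletes one and updates the other, waiting for the kubelet to reflect both changes in the mounted files. A sketch of the volume stanza, as a Python dict (the ConfigMap names are shortened placeholders for the generated names in the log):

```python
# Hypothetical projected volume combining two optional ConfigMap sources.
# `optional: True` is what lets the pod keep running after cm-test-opt-del
# is deleted; the kubelet then drops its files from the volume.
projected_volume = {
    "name": "projected-configmap-volume",
    "projected": {
        "sources": [
            {"configMap": {"name": "cm-test-opt-del", "optional": True}},
            {"configMap": {"name": "cm-test-opt-upd", "optional": True}},
        ]
    },
}
print([s["configMap"]["name"] for s in projected_volume["projected"]["sources"]])
```

The third ConfigMap (`cm-test-opt-create`) is created only after the pod starts, checking that a late-created optional source also shows up in the volume.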
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:44:31.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 21 01:44:36.562: INFO: Successfully updated pod "labelsupdateb91e7e9a-caf3-11ea-86e4-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:44:40.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6zrwj" for this suite.
Jul 21 01:45:02.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:45:02.694: INFO: namespace: e2e-tests-downward-api-6zrwj, resource: bindings, ignored listing per whitelist
Jul 21 01:45:02.717: INFO: namespace e2e-tests-downward-api-6zrwj deletion completed in 22.100742844s

• [SLOW TEST:30.817 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
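The Downward API test patches the pod's labels and waits for the change to appear in a mounted file. The mechanism is a `downwardAPI` volume item pointing at `metadata.labels`, which the kubelet rewrites when labels change; a minimal sketch of that item as a Python dict:

```python
# Hypothetical downwardAPI volume: exposes the pod's own labels as a file
# named "labels". The kubelet updates the file after a label modification,
# which is what the test's "Successfully updated pod" step then observes.
downward_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [
            {"path": "labels", "fieldRef": {"fieldPath": "metadata.labels"}}
        ]
    },
}
print(downward_volume["downwardAPI"]["items"][0]["fieldRef"]["fieldPath"])
```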
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:45:02.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-cb7b8ce4-caf3-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 01:45:02.821: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-rgkls" to be "success or failure"
Jul 21 01:45:02.860: INFO: Pod "pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 38.754028ms
Jul 21 01:45:05.020: INFO: Pod "pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198407908s
Jul 21 01:45:07.023: INFO: Pod "pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202107056s
STEP: Saw pod success
Jul 21 01:45:07.023: INFO: Pod "pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:45:07.026: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jul 21 01:45:07.067: INFO: Waiting for pod pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009 to disappear
Jul 21 01:45:07.151: INFO: Pod pod-configmaps-cb7c484a-caf3-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:45:07.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rgkls" for this suite.
Jul 21 01:45:13.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:45:13.199: INFO: namespace: e2e-tests-configmap-rgkls, resource: bindings, ignored listing per whitelist
Jul 21 01:45:13.253: INFO: namespace e2e-tests-configmap-rgkls deletion completed in 6.098101175s

• [SLOW TEST:10.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
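"Mappings and Item mode set" in the test name above refers to ConfigMap volume `items`: each entry remaps a key to a file path and can carry a per-file `mode`. A sketch of such a volume stanza as a Python dict (key, path, and mode values are illustrative):

```python
# Hypothetical ConfigMap volume with a key-to-path mapping and an explicit
# per-item file mode (0o400 = owner read-only). The test pod cats the file
# and checks both its contents and its mode bits.
configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",
        "items": [
            {"key": "data-1", "path": "path/to/data-2", "mode": 0o400}
        ],
    },
}
print(oct(configmap_volume["configMap"]["items"][0]["mode"]))
```

The "success or failure" wait in the log is the standard pattern for these consume-in-volume tests: the pod runs a one-shot container whose exit status decides the phase.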
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:45:13.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:45:13.345: INFO: Creating deployment "nginx-deployment"
Jul 21 01:45:13.349: INFO: Waiting for observed generation 1
Jul 21 01:45:15.588: INFO: Waiting for all required pods to come up
Jul 21 01:45:15.592: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 21 01:45:25.607: INFO: Waiting for deployment "nginx-deployment" to complete
Jul 21 01:45:25.612: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul 21 01:45:25.618: INFO: Updating deployment nginx-deployment
Jul 21 01:45:25.618: INFO: Waiting for observed generation 2
Jul 21 01:45:27.634: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 21 01:45:27.696: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 21 01:45:27.699: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 21 01:45:27.707: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 21 01:45:27.707: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 21 01:45:27.709: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 21 01:45:27.713: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul 21 01:45:27.713: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul 21 01:45:27.717: INFO: Updating deployment nginx-deployment
Jul 21 01:45:27.718: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul 21 01:45:27.789: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 21 01:45:27.936: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
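The 20/13 split verified in the two lines above is proportional scaling at work: scaling the deployment from 10 to 30 distributes the delta across both replicasets in proportion to their current sizes (8 and 5), capped so the total never exceeds replicas + maxSurge = 30 + 3 = 33. A simplified Python model of the controller's arithmetic (the real logic lives in the deployment controller's scaling utilities; this sketch only reproduces the rounding behavior seen here):

```python
def proportional_scale(replicasets: dict, new_replicas: int, max_surge: int) -> dict:
    """Distribute a scale-up delta across replicasets in proportion to
    their current sizes (largest first); the last one absorbs rounding
    leftover so the total lands exactly on new_replicas + max_surge."""
    current_total = sum(replicasets.values())
    to_add = new_replicas + max_surge - current_total
    items = sorted(replicasets.items(), key=lambda kv: kv[1], reverse=True)
    result, remaining = {}, to_add
    for i, (name, size) in enumerate(items):
        if i == len(items) - 1:
            share = remaining  # leftover from rounding goes to the last RS
        else:
            share = min(round(size * to_add / current_total), remaining)
        result[name] = size + share
        remaining -= share
    return result

# Matches the log: first rollout 8 -> 20, second rollout 5 -> 13 (sum 33).
print(proportional_scale({"first": 8, "second": 5}, new_replicas=30, max_surge=3))
```

This is why neither replicaset jumps straight to its "fair" final size: the broken second rollout (image `nginx:404`) still gets its proportional share of the new replicas.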
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 21 01:45:28.108: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nxjsx/deployments/nginx-deployment,UID:d1c58035-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925936,Generation:3,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-07-21 01:45:26 +0000 UTC 2020-07-21 01:45:13 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-07-21 01:45:27 +0000 UTC 2020-07-21 01:45:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul 21 01:45:28.242: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nxjsx/replicasets/nginx-deployment-5c98f8fb5,UID:d9162c16-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925976,Generation:3,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d1c58035-caf3-11ea-b2c9-0242ac120008 0xc0016055f7 0xc0016055f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 21 01:45:28.242: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul 21 01:45:28.242: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nxjsx/replicasets/nginx-deployment-85ddf47c5d,UID:d1d11d59-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925975,Generation:3,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d1c58035-caf3-11ea-b2c9-0242ac120008 0xc001605747 0xc001605748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-7cljf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7cljf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-7cljf,UID:da7a6749-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925967,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c802e7 0xc001c802e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80360} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-bpdgr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bpdgr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-bpdgr,UID:d9192cc6-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925899,Generation:0,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c803f7 0xc001c803f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80470} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-21 01:45:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-d4s58" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d4s58,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-d4s58,UID:da7a9bd6-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925973,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80550 0xc001c80551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c805d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c805f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-gndp2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gndp2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-gndp2,UID:da83a2c5-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925977,Generation:0,CreationTimestamp:2020-07-21 01:45:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80667 0xc001c80668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c806e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-kvcft" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kvcft,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-kvcft,UID:d9182a3c-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925890,Generation:0,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80777 0xc001c80778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80800} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-21 01:45:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-l2b2z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2b2z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-l2b2z,UID:da7a9f90-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925972,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c808e0 0xc001c808e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80960} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.282: INFO: Pod "nginx-deployment-5c98f8fb5-lxrcq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lxrcq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-lxrcq,UID:da78af7d-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925959,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c809f7 0xc001c809f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80a70} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-mpkfz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mpkfz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-mpkfz,UID:da61a0d2-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925944,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80b07 0xc001c80b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80b80} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-nslpm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nslpm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-nslpm,UID:d937b654-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925912,Generation:0,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80c17 0xc001c80c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80c90} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-21 01:45:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-pmf6n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pmf6n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-pmf6n,UID:d9192363-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925891,Generation:0,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80d70 0xc001c80d71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80df0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-21 01:45:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-rf5z7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rf5z7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-rf5z7,UID:da78bee8-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925958,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80ed0 0xc001c80ed1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c80f50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c80f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-rfnhf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rfnhf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-rfnhf,UID:da7aab1d-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925971,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c80fe7 0xc001c80fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c81060} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.283: INFO: Pod "nginx-deployment-5c98f8fb5-zlmkt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zlmkt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-5c98f8fb5-zlmkt,UID:d9419730-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925914,Generation:0,CreationTimestamp:2020-07-21 01:45:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d9162c16-caf3-11ea-b2c9-0242ac120008 0xc001c810f7 0xc001c810f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c81170} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-21 01:45:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.284: INFO: Pod "nginx-deployment-85ddf47c5d-7dl57" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7dl57,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-7dl57,UID:da7a93a0-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925970,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81250 0xc001c81251}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c812c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c812e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.284: INFO: Pod "nginx-deployment-85ddf47c5d-b4d4l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b4d4l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-b4d4l,UID:da78a667-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925949,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81357 0xc001c81358}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c813d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c813f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.284: INFO: Pod "nginx-deployment-85ddf47c5d-b5pgt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5pgt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-b5pgt,UID:da7a8aca-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925969,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81467 0xc001c81468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c815a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.284: INFO: Pod "nginx-deployment-85ddf47c5d-bn7sc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bn7sc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-bn7sc,UID:d1d44a3e-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925823,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81617 0xc001c81618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c816b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.232,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c72cab2e9194719f50c39ffb77dac30b6f8451dd59ca0afcb54b8f2a68b12674}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.284: INFO: Pod "nginx-deployment-85ddf47c5d-cpcrn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cpcrn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-cpcrn,UID:da78a193-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925960,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81777 0xc001c81778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c817f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.285: INFO: Pod "nginx-deployment-85ddf47c5d-f4wcw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f4wcw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-f4wcw,UID:da787341-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925953,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81887 0xc001c81888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.285: INFO: Pod "nginx-deployment-85ddf47c5d-fxfml" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fxfml,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-fxfml,UID:d1d44f6c-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925829,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81997 0xc001c81998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.233,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8edffae98021735b0a30d24595bf3a74be90b0ffb07988fd8b3d9ecc46641048}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.285: INFO: Pod "nginx-deployment-85ddf47c5d-j5c99" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j5c99,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-j5c99,UID:da7a5a8a-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925966,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81af7 0xc001c81af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.285: INFO: Pod "nginx-deployment-85ddf47c5d-m4dcl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m4dcl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-m4dcl,UID:da61aaa9-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925979,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81c17 0xc001c81c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-21 01:45:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.285: INFO: Pod "nginx-deployment-85ddf47c5d-m8lcz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8lcz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-m8lcz,UID:da61af92-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925943,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81d87 0xc001c81d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.286: INFO: Pod "nginx-deployment-85ddf47c5d-mq4pd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mq4pd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-mq4pd,UID:d1de12a8-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925846,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001c81eb7 0xc001c81eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001c81f30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c81f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.235,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f96c3d772fcb1ea02aa2fb71d194c19d5092edd0144bd83c10bc4beba7f3970a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.287: INFO: Pod "nginx-deployment-85ddf47c5d-mqg6s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mqg6s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-mqg6s,UID:d1d6cb5c-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925848,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca027 0xc001bca028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bca1b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bca1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.234,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://31198c251600b4f315d98a79fad10e90f5b88dc29fd3a37ff1af9acdcf325710}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.287: INFO: Pod "nginx-deployment-85ddf47c5d-nwn4l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nwn4l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-nwn4l,UID:da7a85a4-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925963,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca347 0xc001bca348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bca420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bca440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.287: INFO: Pod "nginx-deployment-85ddf47c5d-qwph2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qwph2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-qwph2,UID:da5a3605-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925981,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca527 0xc001bca528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bca610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bca630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-21 01:45:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.287: INFO: Pod "nginx-deployment-85ddf47c5d-rgrkx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rgrkx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-rgrkx,UID:d1d6cbf5-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925828,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca6e7 0xc001bca6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bca760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bca780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.222,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c88986f02c1bb02dfe1c389a46b01d0e3fe17a460be873110da7f01010f0b735}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.287: INFO: Pod "nginx-deployment-85ddf47c5d-stqch" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-stqch,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-stqch,UID:d1d6cfb5-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925832,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca847 0xc001bca848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bca8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bca8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.221,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cf853eaf0808e4cf24822850a70459b4ac6eb15467661afed6c050ef7da2bba7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.288: INFO: Pod "nginx-deployment-85ddf47c5d-twtbx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-twtbx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-twtbx,UID:d1d3d688-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925803,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bca9a7 0xc001bca9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bcaa70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bcaa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.220,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://55821452cc1e88b0e9a37eb5b0b8e6ce76d42b13a6d574dbdcd347c907980d2b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.288: INFO: Pod "nginx-deployment-85ddf47c5d-vlsg7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vlsg7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-vlsg7,UID:da789118-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925957,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bcab87 0xc001bcab88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bcac90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bcacc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.288: INFO: Pod "nginx-deployment-85ddf47c5d-xv6dn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xv6dn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-xv6dn,UID:da7a776d-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925964,Generation:0,CreationTimestamp:2020-07-21 01:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bcad77 0xc001bcad78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bcae00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bcae20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 21 01:45:28.288: INFO: Pod "nginx-deployment-85ddf47c5d-zl5kk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zl5kk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nxjsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxjsx/pods/nginx-deployment-85ddf47c5d-zl5kk,UID:d1d6d5e3-caf3-11ea-b2c9-0242ac120008,ResourceVersion:1925858,Generation:0,CreationTimestamp:2020-07-21 01:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d d1d11d59-caf3-11ea-b2c9-0242ac120008 0xc001bcaef7 0xc001bcaef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q69wb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q69wb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q69wb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001bcafc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bcaff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:45:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.223,StartTime:2020-07-21 01:45:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 01:45:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b76c9659e75069b30ce76f99158ffcfa2df744be80ebb3fe9efc9da034443281}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:45:28.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nxjsx" for this suite.
Jul 21 01:45:52.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:45:52.579: INFO: namespace: e2e-tests-deployment-nxjsx, resource: bindings, ignored listing per whitelist
Jul 21 01:45:52.602: INFO: namespace e2e-tests-deployment-nxjsx deletion completed in 24.221621218s

• [SLOW TEST:39.348 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
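The proportional-scaling behavior this test exercises can be sketched as follows. This is an illustrative simplification, not the actual Deployment controller code: when a Deployment is scaled mid-rollout, new replicas are distributed across its ReplicaSets in proportion to their current sizes, with any rounding leftover handed to the largest ReplicaSet. Function and argument names here are invented for the sketch.

```python
def proportional_scale(replica_sets, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes. Simplified sketch of the Deployment controller's
    proportional-scaling invariant; not the real implementation.

    replica_sets: dict of ReplicaSet name -> current replica count.
    Returns a dict of ReplicaSet name -> new replica count.
    """
    current_total = sum(replica_sets.values())
    if current_total == 0:
        # Nothing to scale proportionally from; leave everything at zero.
        return dict.fromkeys(replica_sets, 0)
    scaled = {}
    allocated = 0
    for name, size in replica_sets.items():
        # Floor of each ReplicaSet's proportional share.
        share = size * new_total // current_total
        scaled[name] = share
        allocated += share
    # Hand any rounding leftover to the currently largest ReplicaSet.
    leftover = new_total - allocated
    if leftover:
        largest = max(scaled, key=lambda n: replica_sets[n])
        scaled[largest] += leftover
    return scaled
```

For example, scaling two ReplicaSets of 8 and 5 replicas up to a total of 30 yields 19 and 11: both grow roughly in proportion, and the one-replica rounding remainder goes to the larger set.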
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:45:52.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 21 01:45:53.397: INFO: Pod name wrapped-volume-race-e99bf9e2-caf3-11ea-86e4-0242ac110009: Found 0 pods out of 5
Jul 21 01:45:58.405: INFO: Pod name wrapped-volume-race-e99bf9e2-caf3-11ea-86e4-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e99bf9e2-caf3-11ea-86e4-0242ac110009 in namespace e2e-tests-emptydir-wrapper-ssnzq, will wait for the garbage collector to delete the pods
Jul 21 01:47:48.706: INFO: Deleting ReplicationController wrapped-volume-race-e99bf9e2-caf3-11ea-86e4-0242ac110009 took: 7.026196ms
Jul 21 01:47:48.806: INFO: Terminating ReplicationController wrapped-volume-race-e99bf9e2-caf3-11ea-86e4-0242ac110009 pods took: 100.264744ms
STEP: Creating RC which spawns configmap-volume pods
Jul 21 01:48:28.669: INFO: Pod name wrapped-volume-race-4627dd70-caf4-11ea-86e4-0242ac110009: Found 0 pods out of 5
Jul 21 01:48:33.676: INFO: Pod name wrapped-volume-race-4627dd70-caf4-11ea-86e4-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4627dd70-caf4-11ea-86e4-0242ac110009 in namespace e2e-tests-emptydir-wrapper-ssnzq, will wait for the garbage collector to delete the pods
Jul 21 01:51:09.761: INFO: Deleting ReplicationController wrapped-volume-race-4627dd70-caf4-11ea-86e4-0242ac110009 took: 8.03482ms
Jul 21 01:51:09.861: INFO: Terminating ReplicationController wrapped-volume-race-4627dd70-caf4-11ea-86e4-0242ac110009 pods took: 100.25902ms
STEP: Creating RC which spawns configmap-volume pods
Jul 21 01:51:48.361: INFO: Pod name wrapped-volume-race-bd0a3b1e-caf4-11ea-86e4-0242ac110009: Found 0 pods out of 5
Jul 21 01:51:53.368: INFO: Pod name wrapped-volume-race-bd0a3b1e-caf4-11ea-86e4-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bd0a3b1e-caf4-11ea-86e4-0242ac110009 in namespace e2e-tests-emptydir-wrapper-ssnzq, will wait for the garbage collector to delete the pods
Jul 21 01:54:35.456: INFO: Deleting ReplicationController wrapped-volume-race-bd0a3b1e-caf4-11ea-86e4-0242ac110009 took: 7.722023ms
Jul 21 01:54:35.557: INFO: Terminating ReplicationController wrapped-volume-race-bd0a3b1e-caf4-11ea-86e4-0242ac110009 pods took: 100.214901ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:55:19.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ssnzq" for this suite.
Jul 21 01:55:27.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:55:27.608: INFO: namespace: e2e-tests-emptydir-wrapper-ssnzq, resource: bindings, ignored listing per whitelist
Jul 21 01:55:27.646: INFO: namespace e2e-tests-emptydir-wrapper-ssnzq deletion completed in 8.08300177s

• [SLOW TEST:575.045 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
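The pods spawned above each mount many ConfigMap-backed volumes at once, which is what historically raced inside the emptyDir wrapper. A minimal manifest of that shape can be sketched as a plain dict (field names follow the core/v1 Pod schema; the volume names, mount paths, and helper name are illustrative, not taken from the test source):

```python
def configmap_volume_pod(name, configmap_names):
    """Build a minimal core/v1 Pod manifest (as a dict) that mounts one
    volume per ConfigMap, roughly the shape of the wrapped-volume-race
    pods in the test above. Illustrative sketch only.
    """
    volumes = [
        {"name": f"racey-configmap-{i}", "configMap": {"name": cm}}
        for i, cm in enumerate(configmap_names)
    ]
    mounts = [
        {"name": vol["name"], "mountPath": f"/etc/config-{i}"}
        for i, vol in enumerate(volumes)
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "volumes": volumes,
            "containers": [{
                "name": "test-container",
                "image": "docker.io/library/nginx:1.14-alpine",
                "volumeMounts": mounts,
            }],
        },
    }
```

With the 50 ConfigMaps the test creates, each pod ends up with 50 volumes and 50 corresponding mounts, which is the load that provoked the original race.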
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:55:27.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0721 01:55:39.594287       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 01:55:39.594: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:55:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-c7vqv" for this suite.
Jul 21 01:55:51.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:55:51.793: INFO: namespace: e2e-tests-gc-c7vqv, resource: bindings, ignored listing per whitelist
Jul 21 01:55:51.859: INFO: namespace e2e-tests-gc-c7vqv deletion completed in 12.233685434s

• [SLOW TEST:24.213 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
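The invariant this garbage-collector test checks can be sketched in a few lines: a dependent object survives as long as at least one of its ownerReferences still points at a live owner, so pods owned by both `simpletest-rc-to-be-deleted` and `simpletest-rc-to-stay` must not be collected when only the first owner is deleted. This is a toy model of the GC rule, not the real controller; the data shapes are invented for the sketch.

```python
def live_dependents(objects, owners_alive):
    """Return the names of dependents the garbage collector would keep:
    an object is kept while any of its ownerReferences points at a live
    owner. Simplified sketch of the GC invariant, not real GC code.

    objects: list of dicts with "name" and "ownerReferences" (owner names).
    owners_alive: dict of owner name -> bool (still exists?).
    """
    kept = []
    for obj in objects:
        if any(owners_alive.get(ref, False) for ref in obj["ownerReferences"]):
            kept.append(obj["name"])
    return kept
```

Applied to the scenario above: after `simpletest-rc-to-be-deleted` is removed, a pod that also lists `simpletest-rc-to-stay` as an owner is kept, while a pod owned only by the deleted RC is collected.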
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:55:51.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:55:52.286: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

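The request path logged above follows the API server's nodes/proxy subresource format, with the kubelet port spelled out explicitly. A small helper reproducing that path (function and parameter names are illustrative; the path format is taken from the log line above):

```python
def node_log_proxy_path(node_name, kubelet_port=10250, log_path="logs/"):
    """Build the API-server proxy path for reading a node's kubelet logs
    via the nodes/proxy subresource with an explicit kubelet port, as in
    the request logged above. Illustrative helper only.
    """
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/{log_path}"
```

For the node in this run, `node_log_proxy_path("hunter-worker")` yields exactly the `/api/v1/nodes/hunter-worker:10250/proxy/logs/` path seen in the log.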
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 21 01:55:58.731: INFO: Waiting up to 5m0s for pod "downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-d9x5p" to be "success or failure"
Jul 21 01:55:58.735: INFO: Pod "downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821668ms
Jul 21 01:56:00.739: INFO: Pod "downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008008092s
Jul 21 01:56:02.743: INFO: Pod "downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012400511s
STEP: Saw pod success
Jul 21 01:56:02.743: INFO: Pod "downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:56:02.746: INFO: Trying to get logs from node hunter-worker2 pod downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 01:56:02.772: INFO: Waiting for pod downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009 to disappear
Jul 21 01:56:02.808: INFO: Pod downward-api-526f1ddf-caf5-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:56:02.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d9x5p" for this suite.
Jul 21 01:56:08.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:56:08.855: INFO: namespace: e2e-tests-downward-api-d9x5p, resource: bindings, ignored listing per whitelist
Jul 21 01:56:08.913: INFO: namespace e2e-tests-downward-api-d9x5p deletion completed in 6.100358601s

• [SLOW TEST:10.313 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
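The Downward API test above checks that when a container sets no resource limits, env vars backed by `limits.cpu`/`limits.memory` resolve to the node's allocatable capacity. A minimal sketch of such a pod spec (the manifest is illustrative, not the test's actual one; only the container name `dapi-container` is taken from the log):

```yaml
# Hypothetical pod illustrating Downward API defaulting: with no resource
# limits declared, limits.cpu / limits.memory fall back to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name as seen in the log above
    image: busybox
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The test asserts the pod reaches `Succeeded` and that the printed values match the node's allocatable CPU and memory.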
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:56:08.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 01:56:09.038: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 21 01:56:14.042: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 21 01:56:14.042: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 21 01:56:16.046: INFO: Creating deployment "test-rollover-deployment"
Jul 21 01:56:16.056: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 21 01:56:18.061: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 21 01:56:18.067: INFO: Ensure that both replica sets have 1 created replica
Jul 21 01:56:18.073: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 21 01:56:18.078: INFO: Updating deployment test-rollover-deployment
Jul 21 01:56:18.078: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 21 01:56:20.111: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 21 01:56:20.117: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 21 01:56:20.123: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:20.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893378, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:22.130: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:22.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893378, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:24.130: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:24.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:26.129: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:26.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:28.131: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:28.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:30.130: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:30.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:32.131: INFO: all replica sets need to contain the pod-template-hash label
Jul 21 01:56:32.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730893376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 21 01:56:34.131: INFO: 
Jul 21 01:56:34.131: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 21 01:56:34.140: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-p9w67,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9w67/deployments/test-rollover-deployment,UID:5cc593e2-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928177,Generation:2,CreationTimestamp:2020-07-21 01:56:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-21 01:56:16 +0000 UTC 2020-07-21 01:56:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-21 01:56:32 +0000 UTC 2020-07-21 01:56:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul 21 01:56:34.143: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-p9w67,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9w67/replicasets/test-rollover-deployment-5b8479fdb6,UID:5dfbcee1-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928168,Generation:2,CreationTimestamp:2020-07-21 01:56:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5cc593e2-caf5-11ea-b2c9-0242ac120008 0xc0026ed127 0xc0026ed128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 21 01:56:34.143: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 21 01:56:34.143: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-p9w67,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9w67/replicasets/test-rollover-controller,UID:5894eecc-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928176,Generation:2,CreationTimestamp:2020-07-21 01:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5cc593e2-caf5-11ea-b2c9-0242ac120008 0xc0026ecf97 0xc0026ecf98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 21 01:56:34.143: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-p9w67,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9w67/replicasets/test-rollover-deployment-58494b7559,UID:5cc849da-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928130,Generation:2,CreationTimestamp:2020-07-21 01:56:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5cc593e2-caf5-11ea-b2c9-0242ac120008 0xc0026ed057 0xc0026ed058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 21 01:56:34.146: INFO: Pod "test-rollover-deployment-5b8479fdb6-pk5zb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-pk5zb,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-p9w67,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p9w67/pods/test-rollover-deployment-5b8479fdb6-pk5zb,UID:5e116f86-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928146,Generation:0,CreationTimestamp:2020-07-21 01:56:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5dfbcee1-caf5-11ea-b2c9-0242ac120008 0xc0026edce7 0xc0026edce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6rbb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6rbb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6rbb2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026edd60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026edd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:56:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:56:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:56:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 01:56:18 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.6,StartTime:2020-07-21 01:56:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-21 01:56:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://01fcc4021986f90feffab2ae5d34637f1da3cd7dad7995d75f846f16612d1d41}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:56:34.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-p9w67" for this suite.
Jul 21 01:56:42.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:56:42.182: INFO: namespace: e2e-tests-deployment-p9w67, resource: bindings, ignored listing per whitelist
Jul 21 01:56:42.233: INFO: namespace e2e-tests-deployment-p9w67 deletion completed in 8.084250176s

• [SLOW TEST:33.321 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
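The rollover test's deployment spec can be read out of the object dump above; a reconstruction of the key fields (values match the logged spec, layout is illustrative):

```yaml
# Reconstructed from the Deployment dump in the log: one replica, rolling
# update that surges by one pod and never goes below the desired count,
# and a 10s MinReadySeconds gate on new pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must stay ready 10s before counting as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count mid-rollover
      maxSurge: 1              # allow one extra pod while the new ReplicaSet comes up
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The `MinReadySeconds: 10` gate is why the log polls for ~16 seconds with `UpdatedReplicas:1, AvailableReplicas:1` before the new ReplicaSet's pod counts as available and the old ReplicaSets scale to zero.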
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:56:42.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 21 01:56:42.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928240,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 21 01:56:42.337: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928240,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 21 01:56:52.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928260,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 21 01:56:52.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928260,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 21 01:57:02.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928280,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 21 01:57:02.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928280,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 21 01:57:12.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928301,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 21 01:57:12.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-a,UID:6c70d05a-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928301,Generation:0,CreationTimestamp:2020-07-21 01:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 21 01:57:22.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-b,UID:844bd7d0-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928321,Generation:0,CreationTimestamp:2020-07-21 01:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 21 01:57:22.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-b,UID:844bd7d0-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928321,Generation:0,CreationTimestamp:2020-07-21 01:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 21 01:57:32.370: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-b,UID:844bd7d0-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928341,Generation:0,CreationTimestamp:2020-07-21 01:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 21 01:57:32.370: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6kqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6kqf/configmaps/e2e-watch-test-configmap-b,UID:844bd7d0-caf5-11ea-b2c9-0242ac120008,ResourceVersion:1928341,Generation:0,CreationTimestamp:2020-07-21 01:57:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:57:42.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-d6kqf" for this suite.
Jul 21 01:57:48.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:57:48.469: INFO: namespace: e2e-tests-watch-d6kqf, resource: bindings, ignored listing per whitelist
Jul 21 01:57:48.471: INFO: namespace e2e-tests-watch-d6kqf deletion completed in 6.095901756s

• [SLOW TEST:66.237 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
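Editor's note: the e2e framework builds the watched objects in Go rather than from manifests, but the log above implies a configmap shaped roughly like the following sketch (name and label copied from the log; everything else is illustrative). A second copy labeled `multiple-watchers-B` drives the B-watcher events.

```yaml
# Hypothetical YAML equivalent of the configmap the watch test mutates.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"   # bumped on each modify step; watchers see MODIFIED events
```

A label-selector watch such as `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch` would observe the same ADDED/MODIFIED/DELETED sequence recorded above.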
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:57:48.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-93f02a2a-caf5-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 01:57:48.611: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-lsdvg" to be "success or failure"
Jul 21 01:57:48.640: INFO: Pod "pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.527266ms
Jul 21 01:57:50.643: INFO: Pod "pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031767104s
Jul 21 01:57:52.655: INFO: Pod "pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044245359s
STEP: Saw pod success
Jul 21 01:57:52.655: INFO: Pod "pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:57:52.658: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 21 01:57:52.751: INFO: Waiting for pod pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009 to disappear
Jul 21 01:57:52.781: INFO: Pod pod-projected-configmaps-93f09f86-caf5-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:57:52.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lsdvg" for this suite.
Jul 21 01:58:00.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:58:01.006: INFO: namespace: e2e-tests-projected-lsdvg, resource: bindings, ignored listing per whitelist
Jul 21 01:58:01.051: INFO: namespace e2e-tests-projected-lsdvg deletion completed in 8.26718297s

• [SLOW TEST:12.580 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
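Editor's note: a minimal sketch of the scenario this test covers, i.e. one configMap consumed through two projected volumes in the same pod. The configMap name, image, and paths are assumptions; the framework generates the real pod in Go.

```yaml
# Illustrative pod mounting the same configMap via two projected volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["cat", "/etc/projected-configmap-volume-1/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # created beforehand
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same source, second mount
```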
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:58:01.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul 21 01:58:01.703: INFO: Waiting up to 5m0s for pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg" in namespace "e2e-tests-svcaccounts-n6m8f" to be "success or failure"
Jul 21 01:58:01.731: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Pending", Reason="", readiness=false. Elapsed: 27.169642ms
Jul 21 01:58:03.787: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083671434s
Jul 21 01:58:06.009: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305951327s
Jul 21 01:58:08.014: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310364208s
Jul 21 01:58:10.018: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Running", Reason="", readiness=false. Elapsed: 8.314169793s
Jul 21 01:58:12.022: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.318830728s
STEP: Saw pod success
Jul 21 01:58:12.022: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg" satisfied condition "success or failure"
Jul 21 01:58:12.025: INFO: Trying to get logs from node hunter-worker pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg container token-test: 
STEP: delete the pod
Jul 21 01:58:12.100: INFO: Waiting for pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg to disappear
Jul 21 01:58:12.106: INFO: Pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-9r6zg no longer exists
STEP: Creating a pod to test consume service account root CA
Jul 21 01:58:12.111: INFO: Waiting up to 5m0s for pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl" in namespace "e2e-tests-svcaccounts-n6m8f" to be "success or failure"
Jul 21 01:58:12.114: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405287ms
Jul 21 01:58:14.417: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305802962s
Jul 21 01:58:16.421: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309338056s
Jul 21 01:58:18.525: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413458237s
Jul 21 01:58:20.529: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417801126s
Jul 21 01:58:22.533: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.42134035s
STEP: Saw pod success
Jul 21 01:58:22.533: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl" satisfied condition "success or failure"
Jul 21 01:58:22.535: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl container root-ca-test: 
STEP: delete the pod
Jul 21 01:58:22.639: INFO: Waiting for pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl to disappear
Jul 21 01:58:22.727: INFO: Pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-r44zl no longer exists
STEP: Creating a pod to test consume service account namespace
Jul 21 01:58:22.732: INFO: Waiting up to 5m0s for pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms" in namespace "e2e-tests-svcaccounts-n6m8f" to be "success or failure"
Jul 21 01:58:22.757: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms": Phase="Pending", Reason="", readiness=false. Elapsed: 25.413515ms
Jul 21 01:58:24.812: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08023818s
Jul 21 01:58:26.816: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083985217s
Jul 21 01:58:28.979: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247438424s
Jul 21 01:58:31.039: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.307150756s
STEP: Saw pod success
Jul 21 01:58:31.039: INFO: Pod "pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms" satisfied condition "success or failure"
Jul 21 01:58:31.041: INFO: Trying to get logs from node hunter-worker pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms container namespace-test: 
STEP: delete the pod
Jul 21 01:58:31.219: INFO: Waiting for pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms to disappear
Jul 21 01:58:31.350: INFO: Pod pod-service-account-9bbe843e-caf5-11ea-86e4-0242ac110009-kt6ms no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:58:31.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-n6m8f" for this suite.
Jul 21 01:58:37.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:58:37.383: INFO: namespace: e2e-tests-svcaccounts-n6m8f, resource: bindings, ignored listing per whitelist
Jul 21 01:58:37.443: INFO: namespace e2e-tests-svcaccounts-n6m8f deletion completed in 6.088385166s

• [SLOW TEST:36.391 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
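Editor's note: the three pods above (`token-test`, `root-ca-test`, `namespace-test`) each read one of the standard auto-mounted service account files. The mount path is the standard in-pod location; the pod shape below is an illustrative sketch, not the framework's generated spec.

```yaml
# Illustrative pod reading the auto-mounted service account token.
# Sibling pods would cat ca.crt and namespace from the same directory.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```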
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:58:37.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b11e2a3c-caf5-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 01:58:37.564: INFO: Waiting up to 5m0s for pod "pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009" in namespace "e2e-tests-secrets-l8gnh" to be "success or failure"
Jul 21 01:58:37.568: INFO: Pod "pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216799ms
Jul 21 01:58:39.620: INFO: Pod "pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056355543s
Jul 21 01:58:41.638: INFO: Pod "pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074173404s
STEP: Saw pod success
Jul 21 01:58:41.638: INFO: Pod "pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:58:41.640: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jul 21 01:58:41.790: INFO: Waiting for pod pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009 to disappear
Jul 21 01:58:41.799: INFO: Pod pod-secrets-b11e9c34-caf5-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:58:41.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l8gnh" for this suite.
Jul 21 01:58:47.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:58:47.894: INFO: namespace: e2e-tests-secrets-l8gnh, resource: bindings, ignored listing per whitelist
Jul 21 01:58:47.994: INFO: namespace e2e-tests-secrets-l8gnh deletion completed in 6.190997026s

• [SLOW TEST:10.551 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
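Editor's note: a sketch of a secret volume with `defaultMode` set, which is what this test exercises. The mode value (0400 here), names, and image are illustrative assumptions.

```yaml
# Illustrative pod mounting a secret volume with a non-default file mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # created beforehand
      defaultMode: 0400         # files appear read-only to the owner
```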
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:58:47.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 21 01:58:57.255: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:58:58.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-gj85z" for this suite.
Jul 21 01:59:20.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:59:20.448: INFO: namespace: e2e-tests-replicaset-gj85z, resource: bindings, ignored listing per whitelist
Jul 21 01:59:20.454: INFO: namespace e2e-tests-replicaset-gj85z deletion completed in 22.114366688s

• [SLOW TEST:32.460 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
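Editor's note: the adoption/release mechanics logged above hinge on label matching. A bare pod whose labels match a ReplicaSet's selector gets adopted (an ownerReference is added); changing the pod's label releases it again. A rough sketch, with all names and the image illustrative (the nginx image is taken from elsewhere in this log):

```yaml
# Illustrative ReplicaSet whose selector matches a pre-existing bare pod
# labeled name=pod-adoption-release, causing adoption on creation.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```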
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:59:20.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul 21 01:59:24.660: INFO: Pod pod-hostip-cac99c1f-caf5-11ea-86e4-0242ac110009 has hostIP: 172.18.0.2
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:59:24.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-l69rh" for this suite.
Jul 21 01:59:47.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:59:47.058: INFO: namespace: e2e-tests-pods-l69rh, resource: bindings, ignored listing per whitelist
Jul 21 01:59:47.089: INFO: namespace e2e-tests-pods-l69rh deletion completed in 22.424045554s

• [SLOW TEST:26.634 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:59:47.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 21 01:59:47.228: INFO: Waiting up to 5m0s for pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-tg9th" to be "success or failure"
Jul 21 01:59:47.241: INFO: Pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.37241ms
Jul 21 01:59:49.358: INFO: Pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129988679s
Jul 21 01:59:51.361: INFO: Pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.133201608s
Jul 21 01:59:53.364: INFO: Pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136406624s
STEP: Saw pod success
Jul 21 01:59:53.364: INFO: Pod "pod-daa1f0ac-caf5-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 01:59:53.367: INFO: Trying to get logs from node hunter-worker2 pod pod-daa1f0ac-caf5-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 01:59:53.443: INFO: Waiting for pod pod-daa1f0ac-caf5-11ea-86e4-0242ac110009 to disappear
Jul 21 01:59:53.453: INFO: Pod pod-daa1f0ac-caf5-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 01:59:53.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tg9th" for this suite.
Jul 21 01:59:59.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 01:59:59.517: INFO: namespace: e2e-tests-emptydir-tg9th, resource: bindings, ignored listing per whitelist
Jul 21 01:59:59.590: INFO: namespace e2e-tests-emptydir-tg9th deletion completed in 6.1331467s

• [SLOW TEST:12.501 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
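Editor's note: "(root,0644,tmpfs)" in the test name means: as root, write a file with mode 0644 into a memory-backed (`medium: Memory`) emptyDir and verify its contents and permissions. An illustrative sketch, with names and image assumed:

```yaml
# Illustrative pod exercising a tmpfs-backed emptyDir with a 0644 file.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c",
      "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs rather than node disk
```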
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 01:59:59.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 01:59:59.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5zrwr'
Jul 21 02:00:02.163: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 21 02:00:02.163: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 21 02:00:02.214: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j7bzr]
Jul 21 02:00:02.214: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j7bzr" in namespace "e2e-tests-kubectl-5zrwr" to be "running and ready"
Jul 21 02:00:02.217: INFO: Pod "e2e-test-nginx-rc-j7bzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35881ms
Jul 21 02:00:04.220: INFO: Pod "e2e-test-nginx-rc-j7bzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006019319s
Jul 21 02:00:06.224: INFO: Pod "e2e-test-nginx-rc-j7bzr": Phase="Running", Reason="", readiness=true. Elapsed: 4.009709748s
Jul 21 02:00:06.224: INFO: Pod "e2e-test-nginx-rc-j7bzr" satisfied condition "running and ready"
Jul 21 02:00:06.224: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-j7bzr]
Jul 21 02:00:06.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5zrwr'
Jul 21 02:00:06.343: INFO: stderr: ""
Jul 21 02:00:06.343: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul 21 02:00:06.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5zrwr'
Jul 21 02:00:06.461: INFO: stderr: ""
Jul 21 02:00:06.462: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:00:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5zrwr" for this suite.
Jul 21 02:00:12.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:00:12.571: INFO: namespace: e2e-tests-kubectl-5zrwr, resource: bindings, ignored listing per whitelist
Jul 21 02:00:12.697: INFO: namespace e2e-tests-kubectl-5zrwr deletion completed in 6.232042725s

• [SLOW TEST:13.107 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:00:12.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 21 02:00:12.921: INFO: Waiting up to 5m0s for pod "downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-2bjz5" to be "success or failure"
Jul 21 02:00:12.936: INFO: Pod "downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.413754ms
Jul 21 02:00:14.975: INFO: Pod "downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054241822s
Jul 21 02:00:16.980: INFO: Pod "downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058629061s
STEP: Saw pod success
Jul 21 02:00:16.980: INFO: Pod "downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:00:16.983: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 02:00:17.001: INFO: Waiting for pod downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009 to disappear
Jul 21 02:00:17.005: INFO: Pod downward-api-e9f150b4-caf5-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:00:17.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2bjz5" for this suite.
Jul 21 02:00:23.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:00:23.093: INFO: namespace: e2e-tests-downward-api-2bjz5, resource: bindings, ignored listing per whitelist
Jul 21 02:00:23.107: INFO: namespace e2e-tests-downward-api-2bjz5 deletion completed in 6.09892811s

• [SLOW TEST:10.410 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
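The Downward API test above injects the pod's own metadata into the container through environment variables. A minimal sketch of the kind of manifest involved, expressed as a Python dict (the pod name, image, and command here are illustrative; the e2e framework generates its own):

```python
# Sketch of a pod that exposes its own UID via the downward API.
# An env var with valueFrom.fieldRef resolves against the pod's
# metadata at runtime, which is what the test asserts on.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "echo POD_UID=$POD_UID"],
            "env": [{
                "name": "POD_UID",
                # metadata.uid is filled in by the kubelet, not the author
                "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
            }],
        }],
    },
}
```

The pod runs to completion ("success or failure" in the log means phase `Succeeded` or `Failed`), and the test then reads the container logs to verify the UID was present.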
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:00:23.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jul 21 02:00:23.734: INFO: created pod pod-service-account-defaultsa
Jul 21 02:00:23.734: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 21 02:00:23.778: INFO: created pod pod-service-account-mountsa
Jul 21 02:00:23.778: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 21 02:00:23.797: INFO: created pod pod-service-account-nomountsa
Jul 21 02:00:23.797: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 21 02:00:23.892: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 21 02:00:23.892: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 21 02:00:23.972: INFO: created pod pod-service-account-mountsa-mountspec
Jul 21 02:00:23.972: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 21 02:00:24.279: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 21 02:00:24.279: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 21 02:00:24.557: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 21 02:00:24.557: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 21 02:00:24.593: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 21 02:00:24.593: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 21 02:00:24.790: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 21 02:00:24.790: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:00:24.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rpbwt" for this suite.
Jul 21 02:00:55.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:00:55.702: INFO: namespace: e2e-tests-svcaccounts-rpbwt, resource: bindings, ignored listing per whitelist
Jul 21 02:00:55.721: INFO: namespace e2e-tests-svcaccounts-rpbwt deletion completed in 30.529001253s

• [SLOW TEST:32.613 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
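The nine pods above form a 3x3 matrix: service accounts with `automountServiceAccountToken` unset, true, and false, crossed with pod specs that leave the field unset, set it true, or set it false. The log's "volume mount: true/false" results follow a simple precedence rule, sketched here as a plain function:

```python
def token_automounted(sa_automount, pod_automount):
    """Effective service-account token automount decision (sketch).

    pod.spec.automountServiceAccountToken, when set, overrides the
    service account's automountServiceAccountToken; if both are
    unset (None), the token is mounted by default.
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

This reproduces the results in the log: `nomountsa` alone gives `false`, but `nomountsa-mountspec` gives `true` because the pod-level opt-in wins, and `defaultsa-nomountspec` gives `false` because the pod-level opt-out wins over the default.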
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:00:55.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 02:00:56.814: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 21 02:00:57.158: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:00:57.425: INFO: Number of nodes with available pods: 0
Jul 21 02:00:57.425: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:00:59.300: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:00:59.617: INFO: Number of nodes with available pods: 0
Jul 21 02:00:59.617: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:01:00.467: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:00.470: INFO: Number of nodes with available pods: 0
Jul 21 02:01:00.470: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:01:01.505: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:01.509: INFO: Number of nodes with available pods: 0
Jul 21 02:01:01.510: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:01:02.430: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:02.433: INFO: Number of nodes with available pods: 0
Jul 21 02:01:02.433: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:01:03.436: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:03.439: INFO: Number of nodes with available pods: 0
Jul 21 02:01:03.439: INFO: Node hunter-worker is running more than one daemon pod
Jul 21 02:01:04.430: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:04.433: INFO: Number of nodes with available pods: 2
Jul 21 02:01:04.433: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 21 02:01:04.476: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:04.476: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:04.510: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:05.514: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:05.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:05.518: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:06.514: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:06.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:06.518: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:07.515: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:07.515: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:07.519: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:08.515: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:08.515: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:08.515: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:08.519: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:09.833: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:09.833: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:09.833: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:09.838: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:10.635: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:10.635: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:10.635: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:10.638: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:11.582: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:11.582: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:11.582: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:11.586: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:12.521: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:12.521: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:12.521: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:12.524: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:13.533: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:13.533: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:13.533: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:13.537: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:14.514: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:14.514: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:14.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:14.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:15.514: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:15.514: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:15.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:15.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:16.515: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:16.515: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:16.515: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:16.519: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:17.803: INFO: Wrong image for pod: daemon-set-8r688. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:17.803: INFO: Pod daemon-set-8r688 is not available
Jul 21 02:01:17.803: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:17.807: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:18.514: INFO: Pod daemon-set-h9lbg is not available
Jul 21 02:01:18.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:18.518: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:19.803: INFO: Pod daemon-set-h9lbg is not available
Jul 21 02:01:19.803: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:19.808: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:20.515: INFO: Pod daemon-set-h9lbg is not available
Jul 21 02:01:20.515: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:20.519: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:21.533: INFO: Pod daemon-set-h9lbg is not available
Jul 21 02:01:21.533: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:21.537: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:22.569: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:22.574: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:23.545: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:23.549: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:24.557: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:24.557: INFO: Pod daemon-set-jzhkb is not available
Jul 21 02:01:24.561: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:25.587: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:25.587: INFO: Pod daemon-set-jzhkb is not available
Jul 21 02:01:25.591: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:26.514: INFO: Wrong image for pod: daemon-set-jzhkb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 21 02:01:26.514: INFO: Pod daemon-set-jzhkb is not available
Jul 21 02:01:26.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:27.539: INFO: Pod daemon-set-fxz7b is not available
Jul 21 02:01:27.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 21 02:01:27.582: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:27.683: INFO: Number of nodes with available pods: 1
Jul 21 02:01:27.683: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 21 02:01:28.756: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:28.759: INFO: Number of nodes with available pods: 1
Jul 21 02:01:28.759: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 21 02:01:29.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:29.747: INFO: Number of nodes with available pods: 1
Jul 21 02:01:29.747: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 21 02:01:30.726: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:30.744: INFO: Number of nodes with available pods: 1
Jul 21 02:01:30.745: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 21 02:01:31.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:31.693: INFO: Number of nodes with available pods: 1
Jul 21 02:01:31.693: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 21 02:01:32.688: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 21 02:01:32.691: INFO: Number of nodes with available pods: 2
Jul 21 02:01:32.691: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xptqv, will wait for the garbage collector to delete the pods
Jul 21 02:01:32.775: INFO: Deleting DaemonSet.extensions daemon-set took: 19.997283ms
Jul 21 02:01:32.875: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.256464ms
Jul 21 02:01:47.822: INFO: Number of nodes with available pods: 0
Jul 21 02:01:47.822: INFO: Number of running nodes: 0, number of available pods: 0
Jul 21 02:01:47.825: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xptqv/daemonsets","resourceVersion":"1929305"},"items":null}

Jul 21 02:01:47.828: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xptqv/pods","resourceVersion":"1929305"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:01:47.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xptqv" for this suite.
Jul 21 02:01:53.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:01:53.921: INFO: namespace: e2e-tests-daemonsets-xptqv, resource: bindings, ignored listing per whitelist
Jul 21 02:01:53.944: INFO: namespace e2e-tests-daemonsets-xptqv deletion completed in 6.104251067s

• [SLOW TEST:58.223 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
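The DaemonSet test above creates pods running `docker.io/library/nginx:1.14-alpine`, then patches the pod template to `gcr.io/kubernetes-e2e-test-images/redis:1.0` and waits for the RollingUpdate strategy to replace pods node by node (the repeated "is not available" lines). A sketch of the manifest shape, as a Python dict; the label key is an assumption, the images are the ones in the log:

```python
# Sketch of a DaemonSet with the RollingUpdate strategy exercised above.
# The selector/label key "daemonset-name" is illustrative.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        # RollingUpdate replaces pods automatically when the template changes;
        # the alternative, OnDelete, waits for manual pod deletion.
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {"containers": [{
                "name": "app",
                "image": "docker.io/library/nginx:1.14-alpine",
            }]},
        },
    },
}

# The test then updates the template image; with RollingUpdate the
# controller deletes and recreates one pod per node, which is why the
# log alternates "Wrong image" / "is not available" until convergence.
daemon_set["spec"]["template"]["spec"]["containers"][0]["image"] = (
    "gcr.io/kubernetes-e2e-test-images/redis:1.0")
```

Note also the repeated taint messages: the control-plane node carries `node-role.kubernetes.io/master:NoSchedule`, and since this DaemonSet declares no matching toleration, only the two worker nodes are expected to run pods.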
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:01:53.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul 21 02:01:54.066: INFO: Waiting up to 5m0s for pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-containers-hmzm5" to be "success or failure"
Jul 21 02:01:54.082: INFO: Pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.969024ms
Jul 21 02:01:56.085: INFO: Pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018801239s
Jul 21 02:01:58.372: INFO: Pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306469112s
Jul 21 02:02:00.377: INFO: Pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.310725931s
STEP: Saw pod success
Jul 21 02:02:00.377: INFO: Pod "client-containers-263cd54b-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:02:00.380: INFO: Trying to get logs from node hunter-worker2 pod client-containers-263cd54b-caf6-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 02:02:00.401: INFO: Waiting for pod client-containers-263cd54b-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:02:00.410: INFO: Pod client-containers-263cd54b-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:02:00.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-hmzm5" for this suite.
Jul 21 02:02:06.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:02:06.451: INFO: namespace: e2e-tests-containers-hmzm5, resource: bindings, ignored listing per whitelist
Jul 21 02:02:06.488: INFO: namespace e2e-tests-containers-hmzm5 deletion completed in 6.074174847s

• [SLOW TEST:12.544 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
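The "override the image's default command" test relies on how pod-level `command`/`args` interact with the image's ENTRYPOINT/CMD. The precedence can be sketched as a small function (argument names are illustrative):

```python
def effective_invocation(image_entrypoint, image_cmd, pod_command, pod_args):
    """What the container actually runs (sketch of the precedence rule).

    spec.containers[].command replaces the image ENTRYPOINT;
    spec.containers[].args replaces the image CMD. When command is
    set but args is not, the image CMD is ignored entirely.
    """
    if pod_command is not None:
        return pod_command + (pod_args or [])
    if pod_args is not None:
        return image_entrypoint + pod_args
    return image_entrypoint + image_cmd
```

The test above sets `command` only, which is the "override the docker entrypoint" case: the image's ENTRYPOINT and CMD are both discarded and the pod's command runs alone.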
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:02:06.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 21 02:02:06.638: INFO: Waiting up to 5m0s for pod "pod-2dbc8e76-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-k9xmg" to be "success or failure"
Jul 21 02:02:06.650: INFO: Pod "pod-2dbc8e76-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.781873ms
Jul 21 02:02:08.738: INFO: Pod "pod-2dbc8e76-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099999734s
Jul 21 02:02:10.742: INFO: Pod "pod-2dbc8e76-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10389059s
STEP: Saw pod success
Jul 21 02:02:10.742: INFO: Pod "pod-2dbc8e76-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:02:10.744: INFO: Trying to get logs from node hunter-worker pod pod-2dbc8e76-caf6-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 02:02:10.997: INFO: Waiting for pod pod-2dbc8e76-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:02:11.042: INFO: Pod pod-2dbc8e76-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:02:11.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k9xmg" for this suite.
Jul 21 02:02:17.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:02:17.126: INFO: namespace: e2e-tests-emptydir-k9xmg, resource: bindings, ignored listing per whitelist
Jul 21 02:02:17.141: INFO: namespace e2e-tests-emptydir-k9xmg deletion completed in 6.094940694s

• [SLOW TEST:10.653 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
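The `(non-root,0666,default)` variant names the three parameters of this EmptyDir test: run as a non-root user, check file mode 0666, use the default medium (node-local storage rather than `Memory`/tmpfs). A rough sketch of a pod exercising the same combination; the uid, image, and command are illustrative, not taken from the test:

```python
# Sketch of an emptyDir pod matching the (non-root,0666,default) variant.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-demo"},
    "spec": {
        # "non-root": run the container as an unprivileged uid (illustrative)
        "securityContext": {"runAsUser": 1001},
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # "0666": create a file with that mode and print it back
            "command": ["sh", "-c",
                        "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"],
            "volumeMounts": [{"name": "scratch", "mountPath": "/mnt"}],
        }],
        # "default" medium: {} means node disk; {"medium": "Memory"} would be tmpfs
        "volumes": [{"name": "scratch", "emptyDir": {}}],
    },
}
```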
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:02:17.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 02:02:17.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-fgtmk" to be "success or failure"
Jul 21 02:02:17.292: INFO: Pod "downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 24.107379ms
Jul 21 02:02:19.378: INFO: Pod "downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109963605s
Jul 21 02:02:21.382: INFO: Pod "downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11359658s
STEP: Saw pod success
Jul 21 02:02:21.382: INFO: Pod "downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:02:21.385: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 02:02:21.407: INFO: Waiting for pod downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:02:21.456: INFO: Pod downwardapi-volume-340d77b5-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:02:21.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fgtmk" for this suite.
Jul 21 02:02:27.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:02:27.508: INFO: namespace: e2e-tests-projected-fgtmk, resource: bindings, ignored listing per whitelist
Jul 21 02:02:27.569: INFO: namespace e2e-tests-projected-fgtmk deletion completed in 6.110206126s

• [SLOW TEST:10.428 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
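The spec above verifies that a projected downward API volume exposes the container's CPU limit, scaled by the volume source's divisor. A minimal sketch of that divisor arithmetic (the quantity parsing here is a simplified stand-in for Kubernetes' `resource.Quantity`, and handles only plain and milli-CPU values):

```python
def parse_millicpu(quantity: str) -> int:
    """Parse a CPU quantity like '2' or '500m' into millicores.
    Simplified: real Kubernetes quantities support more suffixes."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def downward_api_value(limit: str, divisor: str = "1") -> int:
    """Value written to the downward API volume file for a
    resourceFieldRef on limits.cpu: ceil(limit / divisor)."""
    limit_m = parse_millicpu(limit)
    divisor_m = parse_millicpu(divisor)
    return -(-limit_m // divisor_m)  # ceiling division on integers
```

For example, a limit of `250m` read with a divisor of `1m` yields `250`, while the same limit with the default divisor of `1` rounds up to `1` core.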
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:02:27.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 02:02:27.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:02:32.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2mrr5" for this suite.
Jul 21 02:03:18.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:03:18.304: INFO: namespace: e2e-tests-pods-2mrr5, resource: bindings, ignored listing per whitelist
Jul 21 02:03:18.307: INFO: namespace e2e-tests-pods-2mrr5 deletion completed in 46.095132481s

• [SLOW TEST:50.738 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:03:18.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 02:03:18.611: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul 21 02:03:23.616: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 21 02:03:23.616: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 21 02:03:23.662: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-zsfsx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zsfsx/deployments/test-cleanup-deployment,UID:5ba03c1e-caf6-11ea-b2c9-0242ac120008,ResourceVersion:1929635,Generation:1,CreationTimestamp:2020-07-21 02:03:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jul 21 02:03:23.671: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jul 21 02:03:23.671: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul 21 02:03:23.671: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-zsfsx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zsfsx/replicasets/test-cleanup-controller,UID:589810eb-caf6-11ea-b2c9-0242ac120008,ResourceVersion:1929636,Generation:1,CreationTimestamp:2020-07-21 02:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 5ba03c1e-caf6-11ea-b2c9-0242ac120008 0xc0009382d7 0xc0009382d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 21 02:03:23.690: INFO: Pod "test-cleanup-controller-l8sdz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-l8sdz,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-zsfsx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zsfsx/pods/test-cleanup-controller-l8sdz,UID:58a47a8f-caf6-11ea-b2c9-0242ac120008,ResourceVersion:1929630,Generation:0,CreationTimestamp:2020-07-21 02:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 589810eb-caf6-11ea-b2c9-0242ac120008 0xc000938977 0xc000938978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-szlwr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szlwr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-szlwr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000938a00} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc000938a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 02:03:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 02:03:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 02:03:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-21 02:03:18 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.17,StartTime:2020-07-21 02:03:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-21 02:03:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://022c1bf9964af662142fb1319798fbe5347b2c1fd1290c2ee25e14c323c7c0cd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:03:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-zsfsx" for this suite.
Jul 21 02:03:31.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:03:31.836: INFO: namespace: e2e-tests-deployment-zsfsx, resource: bindings, ignored listing per whitelist
Jul 21 02:03:31.864: INFO: namespace e2e-tests-deployment-zsfsx deletion completed in 8.116964171s

• [SLOW TEST:13.556 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
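The Deployment spec above passes because the struct dump shows `RevisionHistoryLimit:*0`: with a history limit of zero, the controller deletes every old ReplicaSet as soon as a new revision rolls out. A minimal sketch of that pruning rule, assuming a simplified `(name, revision)` representation rather than the controller's actual data structures:

```python
def prune_old_replicasets(old_rs, limit):
    """Return the names of old ReplicaSets a deployment controller
    would delete: everything beyond the `limit` most recent revisions.
    `old_rs` is a list of (name, revision) tuples (illustrative only)."""
    by_revision = sorted(old_rs, key=lambda rs: rs[1], reverse=True)
    return [name for name, _ in by_revision[limit:]]
```

With `limit=0`, as in this test, the returned list is every old ReplicaSet, which is why `test-cleanup-controller` is expected to disappear once `test-cleanup-deployment` takes over.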
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:03:31.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7mdr8
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 21 02:03:32.797: INFO: Found 0 stateful pods, waiting for 3
Jul 21 02:03:42.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:03:42.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:03:42.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 21 02:03:52.853: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:03:52.853: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:03:52.853: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 21 02:03:52.932: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 21 02:04:03.071: INFO: Updating stateful set ss2
Jul 21 02:04:03.121: INFO: Waiting for Pod e2e-tests-statefulset-7mdr8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 21 02:04:15.249: INFO: Found 2 stateful pods, waiting for 3
Jul 21 02:04:25.254: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:04:25.254: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 21 02:04:25.254: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 21 02:04:25.280: INFO: Updating stateful set ss2
Jul 21 02:04:25.315: INFO: Waiting for Pod e2e-tests-statefulset-7mdr8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 21 02:04:35.341: INFO: Updating stateful set ss2
Jul 21 02:04:35.372: INFO: Waiting for StatefulSet e2e-tests-statefulset-7mdr8/ss2 to complete update
Jul 21 02:04:35.372: INFO: Waiting for Pod e2e-tests-statefulset-7mdr8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 21 02:04:45.380: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7mdr8
Jul 21 02:04:45.383: INFO: Scaling statefulset ss2 to 0
Jul 21 02:05:05.402: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 02:05:05.405: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:05:05.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7mdr8" for this suite.
Jul 21 02:05:11.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:05:11.551: INFO: namespace: e2e-tests-statefulset-7mdr8, resource: bindings, ignored listing per whitelist
Jul 21 02:05:11.598: INFO: namespace e2e-tests-statefulset-7mdr8 deletion completed in 6.176101135s

• [SLOW TEST:99.734 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
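The canary and phased rollout behavior exercised above is driven by the StatefulSet `RollingUpdate` partition: pods with an ordinal at or above the partition get the updated revision, while pods below it keep the current one, so lowering the partition step by step phases the rollout in. A minimal sketch of that rule (names like `ss2` mirror the log; the revision strings are placeholders):

```python
def expected_revisions(replicas, partition, old_rev, new_rev):
    """Map each pod of a RollingUpdate StatefulSet to the revision it
    should run: ordinals >= partition are updated, the rest stay old."""
    return {
        f"ss2-{ordinal}": (new_rev if ordinal >= partition else old_rev)
        for ordinal in range(replicas)
    }
```

With 3 replicas, a partition of 3 updates nothing (the "partition greater than the number of replicas" step), a partition of 2 updates only `ss2-2` (the canary), and a partition of 0 completes the rollout.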
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:05:11.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tqqz5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 241.195.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.195.241_udp@PTR;check="$$(dig +tcp +noall +answer +search 241.195.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.195.241_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tqqz5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tqqz5.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-tqqz5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tqqz5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 241.195.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.195.241_udp@PTR;check="$$(dig +tcp +noall +answer +search 241.195.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.195.241_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 21 02:05:17.960: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.966: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.968: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.976: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.995: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:17.997: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.000: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.002: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.005: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.007: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.010: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.013: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:18.029: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:23.034: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.041: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.044: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.053: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.056: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.078: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.081: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.083: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.086: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.089: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.091: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.094: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.097: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:23.112: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:28.034: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.040: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.043: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.073: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.076: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.078: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.081: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.083: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.086: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.088: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.090: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:28.106: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:33.034: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.040: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.042: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.073: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.077: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.080: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.082: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.085: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.087: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.090: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:33.107: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:38.034: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.040: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.042: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.050: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.052: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.067: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.070: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.072: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.075: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.077: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.080: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.083: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.085: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:38.100: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:43.034: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.039: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.042: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.050: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.077: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.080: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.082: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.086: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.089: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.091: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.095: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.098: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc from pod e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009: the server could not find the requested resource (get pods dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009)
Jul 21 02:05:43.116: INFO: Lookups using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_tcp@dns-test-service.e2e-tests-dns-tqqz5 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tqqz5 jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5 jessie_udp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tqqz5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tqqz5.svc]

Jul 21 02:05:48.123: INFO: DNS probes using e2e-tests-dns-tqqz5/dns-test-9c27e8d1-caf6-11ea-86e4-0242ac110009 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:05:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-tqqz5" for this suite.
Jul 21 02:05:57.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:05:57.225: INFO: namespace: e2e-tests-dns-tqqz5, resource: bindings, ignored listing per whitelist
Jul 21 02:05:57.225: INFO: namespace e2e-tests-dns-tqqz5 deletion completed in 8.285037045s

• [SLOW TEST:45.627 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
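The lookup keys in the failure lists above (`wheezy_udp@…`, `jessie_tcp@_http._tcp.…`) follow a fixed pattern: two client images ("wheezy" and "jessie"), each querying over UDP and TCP, at increasing levels of name qualification, plus an SRV record for the service's `http` port. A simplified sketch of that key construction (not the actual e2e framework code; the real test probes additional names as well):

```python
# Sketch: build the set of DNS probe keys that appear in the log above.
def dns_probe_keys(service, namespace):
    names = [
        service,                                  # bare service name
        f"{service}.{namespace}",                 # namespace-qualified
        f"{service}.{namespace}.svc",             # cluster-suffix qualified
        f"_http._tcp.{service}.{namespace}.svc",  # SRV record for the http port
    ]
    keys = []
    for image in ("wheezy", "jessie"):            # two DNS client images
        for proto in ("udp", "tcp"):
            for name in names:
                keys.append(f"{image}_{proto}@{name}")
    return keys

keys = dns_probe_keys("dns-test-service", "e2e-tests-dns-tqqz5")
print(len(keys))  # 16 probe keys; 13 of them appear in the failure list above
```

The failures here are transient: the probe pod was still starting, so every poll reported "the server could not find the requested resource" until the 02:05:48 iteration succeeded.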
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:05:57.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 02:05:57.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-jkqwh" to be "success or failure"
Jul 21 02:05:57.453: INFO: Pod "downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.089469ms
Jul 21 02:05:59.616: INFO: Pod "downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176977709s
Jul 21 02:06:01.619: INFO: Pod "downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180762476s
STEP: Saw pod success
Jul 21 02:06:01.619: INFO: Pod "downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:06:01.622: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 02:06:01.813: INFO: Waiting for pod downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:06:01.832: INFO: Pod downwardapi-volume-b7440412-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:06:01.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jkqwh" for this suite.
Jul 21 02:06:07.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:06:07.888: INFO: namespace: e2e-tests-projected-jkqwh, resource: bindings, ignored listing per whitelist
Jul 21 02:06:07.946: INFO: namespace e2e-tests-projected-jkqwh deletion completed in 6.110257829s

• [SLOW TEST:10.721 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
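The pod this test creates mounts a projected downwardAPI volume exposing `limits.memory` via `resourceFieldRef`; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory. A sketch of that pod shape as a plain dict (image and command are illustrative assumptions, not taken from the log):

```python
# Sketch of the downwardAPI pod spec this test exercises (illustrative values).
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # illustrative name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "example/mounttest:1.0",  # assumed test image
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            # no resources.limits.memory -> the projected value defaults
            # to the node's allocatable memory
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "memory_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }]},
            }]},
        }],
    },
}
```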
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:06:07.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 02:06:08.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul 21 02:06:08.178: INFO: stderr: ""
Jul 21 02:06:08.178: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:06:08.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6fv2s" for this suite.
Jul 21 02:06:14.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:06:14.245: INFO: namespace: e2e-tests-kubectl-6fv2s, resource: bindings, ignored listing per whitelist
Jul 21 02:06:14.278: INFO: namespace e2e-tests-kubectl-6fv2s deletion completed in 6.095442279s

• [SLOW TEST:6.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
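The kubectl-version conformance test only verifies that both halves of the command's output are printed. A minimal re-implementation of that check, run against an abbreviated copy of the stdout captured in the log above:

```python
# Re-implement the "is all data printed" check from the kubectl version test.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}\n'
    'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}\n'
)  # abbreviated from the logged output above

def all_version_data_printed(out):
    # Both the client and server version stanzas must appear.
    return "Client Version" in out and "Server Version" in out

print(all_version_data_printed(stdout))  # True
```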
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:06:14.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c17a8c11-caf6-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 02:06:14.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-7hps8" to be "success or failure"
Jul 21 02:06:14.678: INFO: Pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 37.551125ms
Jul 21 02:06:16.682: INFO: Pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041050136s
Jul 21 02:06:18.724: INFO: Pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083082881s
Jul 21 02:06:20.728: INFO: Pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087475946s
STEP: Saw pod success
Jul 21 02:06:20.728: INFO: Pod "pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:06:20.731: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 21 02:06:20.864: INFO: Waiting for pod pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:06:21.148: INFO: Pod pod-projected-configmaps-c18eb7b2-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:06:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7hps8" for this suite.
Jul 21 02:06:27.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:06:27.236: INFO: namespace: e2e-tests-projected-7hps8, resource: bindings, ignored listing per whitelist
Jul 21 02:06:27.240: INFO: namespace e2e-tests-projected-7hps8 deletion completed in 6.088120144s

• [SLOW TEST:12.961 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
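The "with mappings as non-root" variant adds two things to a plain configMap volume test: an `items` list that remaps a configMap key to a different path, and a pod-level `runAsUser` so the consuming container is not root. A sketch of that pod shape (key/path names and UID are illustrative; the configMap name is the one from the log):

```python
# Sketch of the projected configMap pod this test creates (illustrative values).
pod_spec = {
    "spec": {
        "securityContext": {"runAsUser": 1000},  # non-root UID (assumed value)
        "containers": [{
            "name": "projected-configmap-volume-test",
            "volumeMounts": [{
                "name": "projected-configmap-volume",
                "mountPath": "/etc/projected-configmap-volume",
            }],
        }],
        "volumes": [{
            "name": "projected-configmap-volume",
            "projected": {"sources": [{
                "configMap": {
                    "name": "projected-configmap-test-volume-map-c17a8c11-caf6-11ea-86e4-0242ac110009",
                    # the "mapping": rename a key to a nested path in the volume
                    "items": [{"key": "data-1", "path": "path/to/data-1"}],
                },
            }]},
        }],
    },
}
```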
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:06:27.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-m64f2 in namespace e2e-tests-proxy-w7fwk
I0721 02:06:27.436056       6 runners.go:184] Created replication controller with name: proxy-service-m64f2, namespace: e2e-tests-proxy-w7fwk, replica count: 1
I0721 02:06:28.486512       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0721 02:06:29.486702       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0721 02:06:30.486920       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0721 02:06:31.487193       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0721 02:06:32.487392       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0721 02:06:33.487645       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0721 02:06:34.487889       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0721 02:06:35.488135       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0721 02:06:36.488355       6 runners.go:184] proxy-service-m64f2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 21 02:06:36.492: INFO: setup took 9.114018357s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 21 02:06:36.497: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-w7fwk/pods/proxy-service-m64f2-wkd68:162/proxy/: bar (200; 4.90564ms)
Jul 21 02:06:36.498: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-w7fwk/pods/proxy-service-m64f2-wkd68:1080/proxy/: ...
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-d8f0424a-caf6-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 02:06:53.881: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-m2r9v" to be "success or failure"
Jul 21 02:06:53.885: INFO: Pod "pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.686629ms
Jul 21 02:06:55.889: INFO: Pod "pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007892569s
Jul 21 02:06:57.893: INFO: Pod "pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012138511s
STEP: Saw pod success
Jul 21 02:06:57.894: INFO: Pod "pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:06:57.897: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jul 21 02:06:57.950: INFO: Waiting for pod pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:06:57.963: INFO: Pod pod-projected-secrets-d8f221dd-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:06:57.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m2r9v" for this suite.
Jul 21 02:07:04.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:07:04.130: INFO: namespace: e2e-tests-projected-m2r9v, resource: bindings, ignored listing per whitelist
Jul 21 02:07:04.134: INFO: namespace e2e-tests-projected-m2r9v deletion completed in 6.166519994s

• [SLOW TEST:10.371 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:07:04.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 02:07:04.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-6xg88" to be "success or failure"
Jul 21 02:07:04.282: INFO: Pod "downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.954162ms
Jul 21 02:07:06.286: INFO: Pod "downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014355358s
Jul 21 02:07:08.290: INFO: Pod "downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018031141s
STEP: Saw pod success
Jul 21 02:07:08.290: INFO: Pod "downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:07:08.293: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 02:07:08.318: INFO: Waiting for pod downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:07:08.341: INFO: Pod downwardapi-volume-df1db5e6-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:07:08.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6xg88" for this suite.
Jul 21 02:07:14.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:07:14.475: INFO: namespace: e2e-tests-projected-6xg88, resource: bindings, ignored listing per whitelist
Jul 21 02:07:14.478: INFO: namespace e2e-tests-projected-6xg88 deletion completed in 6.133110422s

• [SLOW TEST:10.344 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:07:14.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e54954b7-caf6-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 02:07:14.646: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-62fdn" to be "success or failure"
Jul 21 02:07:14.658: INFO: Pod "pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.522055ms
Jul 21 02:07:16.662: INFO: Pod "pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0153129s
Jul 21 02:07:18.666: INFO: Pod "pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019443235s
STEP: Saw pod success
Jul 21 02:07:18.666: INFO: Pod "pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:07:18.668: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jul 21 02:07:18.682: INFO: Waiting for pod pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:07:18.904: INFO: Pod pod-configmaps-e5530010-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:07:18.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-62fdn" for this suite.
Jul 21 02:07:24.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:07:25.008: INFO: namespace: e2e-tests-configmap-62fdn, resource: bindings, ignored listing per whitelist
Jul 21 02:07:25.052: INFO: namespace e2e-tests-configmap-62fdn deletion completed in 6.142385031s

• [SLOW TEST:10.574 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:07:25.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-eb96376a-caf6-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume secrets
Jul 21 02:07:25.170: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-rkppj" to be "success or failure"
Jul 21 02:07:25.203: INFO: Pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 33.794498ms
Jul 21 02:07:27.207: INFO: Pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037425461s
Jul 21 02:07:29.211: INFO: Pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.04156659s
Jul 21 02:07:31.215: INFO: Pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045353529s
STEP: Saw pod success
Jul 21 02:07:31.215: INFO: Pod "pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:07:31.218: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jul 21 02:07:31.265: INFO: Waiting for pod pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009 to disappear
Jul 21 02:07:31.275: INFO: Pod pod-projected-secrets-eb9770f4-caf6-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:07:31.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkppj" for this suite.
Jul 21 02:07:37.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:07:37.476: INFO: namespace: e2e-tests-projected-rkppj, resource: bindings, ignored listing per whitelist
Jul 21 02:07:37.536: INFO: namespace e2e-tests-projected-rkppj deletion completed in 6.25671553s

• [SLOW TEST:12.484 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:07:37.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0721 02:08:08.195793       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 21 02:08:08.195: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:08:08.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rp4p2" for this suite.
Jul 21 02:08:16.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:08:16.250: INFO: namespace: e2e-tests-gc-rp4p2, resource: bindings, ignored listing per whitelist
Jul 21 02:08:16.321: INFO: namespace e2e-tests-gc-rp4p2 deletion completed in 8.121980441s

• [SLOW TEST:38.785 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:08:16.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 21 02:08:16.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tpdfk'
Jul 21 02:08:16.547: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 21 02:08:16.547: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul 21 02:08:20.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tpdfk'
Jul 21 02:08:20.707: INFO: stderr: ""
Jul 21 02:08:20.707: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:08:20.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tpdfk" for this suite.
Jul 21 02:08:42.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:08:42.759: INFO: namespace: e2e-tests-kubectl-tpdfk, resource: bindings, ignored listing per whitelist
Jul 21 02:08:42.831: INFO: namespace e2e-tests-kubectl-tpdfk deletion completed in 22.120383837s

• [SLOW TEST:26.510 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:08:42.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:08:46.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-f7dhx" for this suite.
Jul 21 02:09:29.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:09:29.031: INFO: namespace: e2e-tests-kubelet-test-f7dhx, resource: bindings, ignored listing per whitelist
Jul 21 02:09:29.097: INFO: namespace e2e-tests-kubelet-test-f7dhx deletion completed in 42.096700837s

• [SLOW TEST:46.266 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:09:29.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 21 02:09:29.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931012,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 21 02:09:29.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931013,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 21 02:09:29.257: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931014,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 21 02:09:39.306: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931035,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 21 02:09:39.306: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931036,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul 21 02:09:39.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r9pwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-r9pwd/configmaps/e2e-watch-test-label-changed,UID:3583fdff-caf7-11ea-b2c9-0242ac120008,ResourceVersion:1931037,Generation:0,CreationTimestamp:2020-07-21 02:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:09:39.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-r9pwd" for this suite.
Jul 21 02:09:45.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:09:45.427: INFO: namespace: e2e-tests-watch-r9pwd, resource: bindings, ignored listing per whitelist
Jul 21 02:09:45.449: INFO: namespace e2e-tests-watch-r9pwd deletion completed in 6.127367082s

• [SLOW TEST:16.352 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:09:45.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 21 02:09:45.635: INFO: namespace e2e-tests-kubectl-9g4tc
Jul 21 02:09:45.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9g4tc'
Jul 21 02:09:45.901: INFO: stderr: ""
Jul 21 02:09:45.901: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 21 02:09:46.930: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 02:09:46.930: INFO: Found 0 / 1
Jul 21 02:09:47.906: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 02:09:47.906: INFO: Found 0 / 1
Jul 21 02:09:48.937: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 02:09:48.937: INFO: Found 0 / 1
Jul 21 02:09:49.906: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 02:09:49.906: INFO: Found 1 / 1
Jul 21 02:09:49.906: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 21 02:09:49.909: INFO: Selector matched 1 pods for map[app:redis]
Jul 21 02:09:49.909: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 21 02:09:49.909: INFO: wait on redis-master startup in e2e-tests-kubectl-9g4tc 
Jul 21 02:09:49.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-phw5t redis-master --namespace=e2e-tests-kubectl-9g4tc'
Jul 21 02:09:50.015: INFO: stderr: ""
Jul 21 02:09:50.015: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Jul 02:09:48.881 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Jul 02:09:48.881 # Server started, Redis version 3.2.12\n1:M 21 Jul 02:09:48.882 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Jul 02:09:48.882 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul 21 02:09:50.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-9g4tc'
Jul 21 02:09:50.158: INFO: stderr: ""
Jul 21 02:09:50.158: INFO: stdout: "service/rm2 exposed\n"
Jul 21 02:09:50.164: INFO: Service rm2 in namespace e2e-tests-kubectl-9g4tc found.
STEP: exposing service
Jul 21 02:09:52.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-9g4tc'
Jul 21 02:09:52.320: INFO: stderr: ""
Jul 21 02:09:52.321: INFO: stdout: "service/rm3 exposed\n"
Jul 21 02:09:52.326: INFO: Service rm3 in namespace e2e-tests-kubectl-9g4tc found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:09:54.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9g4tc" for this suite.
Jul 21 02:10:16.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:10:16.433: INFO: namespace: e2e-tests-kubectl-9g4tc, resource: bindings, ignored listing per whitelist
Jul 21 02:10:16.449: INFO: namespace e2e-tests-kubectl-9g4tc deletion completed in 22.113726933s

• [SLOW TEST:30.999 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:10:16.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-51bfc324-caf7-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 02:10:16.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-dxxqt" to be "success or failure"
Jul 21 02:10:16.573: INFO: Pod "pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093123ms
Jul 21 02:10:18.577: INFO: Pod "pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008185001s
Jul 21 02:10:20.581: INFO: Pod "pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01234963s
STEP: Saw pod success
Jul 21 02:10:20.581: INFO: Pod "pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:10:20.584: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 21 02:10:20.611: INFO: Waiting for pod pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:10:20.615: INFO: Pod pod-projected-configmaps-51c1cb5a-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:10:20.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dxxqt" for this suite.
Jul 21 02:10:26.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:10:26.661: INFO: namespace: e2e-tests-projected-dxxqt, resource: bindings, ignored listing per whitelist
Jul 21 02:10:26.706: INFO: namespace e2e-tests-projected-dxxqt deletion completed in 6.087789787s

• [SLOW TEST:10.257 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:10:26.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul 21 02:10:26.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:29.327: INFO: stderr: ""
Jul 21 02:10:29.327: INFO: stdout: "pod/pause created\n"
Jul 21 02:10:29.327: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 21 02:10:29.327: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-xcbt2" to be "running and ready"
Jul 21 02:10:29.359: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.991593ms
Jul 21 02:10:31.362: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03494848s
Jul 21 02:10:33.365: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.038180396s
Jul 21 02:10:33.365: INFO: Pod "pause" satisfied condition "running and ready"
Jul 21 02:10:33.365: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 21 02:10:33.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:33.479: INFO: stderr: ""
Jul 21 02:10:33.479: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 21 02:10:33.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:33.572: INFO: stderr: ""
Jul 21 02:10:33.572: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 21 02:10:33.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:33.678: INFO: stderr: ""
Jul 21 02:10:33.678: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 21 02:10:33.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:33.782: INFO: stderr: ""
Jul 21 02:10:33.782: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul 21 02:10:33.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:33.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 21 02:10:33.911: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 21 02:10:33.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-xcbt2'
Jul 21 02:10:34.235: INFO: stderr: "No resources found.\n"
Jul 21 02:10:34.235: INFO: stdout: ""
Jul 21 02:10:34.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-xcbt2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 21 02:10:34.341: INFO: stderr: ""
Jul 21 02:10:34.341: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:10:34.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xcbt2" for this suite.
Jul 21 02:10:40.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:10:40.419: INFO: namespace: e2e-tests-kubectl-xcbt2, resource: bindings, ignored listing per whitelist
Jul 21 02:10:40.441: INFO: namespace e2e-tests-kubectl-xcbt2 deletion completed in 6.09601348s

• [SLOW TEST:13.734 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:10:40.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul 21 02:10:40.563: INFO: Waiting up to 5m0s for pod "var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-var-expansion-dbw4q" to be "success or failure"
Jul 21 02:10:40.686: INFO: Pod "var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 123.356353ms
Jul 21 02:10:42.690: INFO: Pod "var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127484424s
Jul 21 02:10:44.694: INFO: Pod "var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131533858s
STEP: Saw pod success
Jul 21 02:10:44.694: INFO: Pod "var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:10:44.697: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009 container dapi-container: 
STEP: delete the pod
Jul 21 02:10:44.731: INFO: Waiting for pod var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:10:44.754: INFO: Pod var-expansion-600f42a8-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:10:44.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-dbw4q" for this suite.
Jul 21 02:10:50.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:10:50.847: INFO: namespace: e2e-tests-var-expansion-dbw4q, resource: bindings, ignored listing per whitelist
Jul 21 02:10:50.858: INFO: namespace e2e-tests-var-expansion-dbw4q deletion completed in 6.094213791s

• [SLOW TEST:10.417 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:10:50.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5pnwf
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-5pnwf
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-5pnwf
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-5pnwf
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-5pnwf
Jul 21 02:10:55.197: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5pnwf, name: ss-0, uid: 66a4762f-caf7-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete.
Jul 21 02:10:57.551: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5pnwf, name: ss-0, uid: 66a4762f-caf7-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 21 02:10:57.586: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5pnwf, name: ss-0, uid: 66a4762f-caf7-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 21 02:10:57.645: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-5pnwf
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-5pnwf
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-5pnwf and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 21 02:11:01.867: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5pnwf
Jul 21 02:11:01.870: INFO: Scaling statefulset ss to 0
Jul 21 02:11:11.884: INFO: Waiting for statefulset status.replicas updated to 0
Jul 21 02:11:11.887: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:11:11.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5pnwf" for this suite.
Jul 21 02:11:17.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:11:18.049: INFO: namespace: e2e-tests-statefulset-5pnwf, resource: bindings, ignored listing per whitelist
Jul 21 02:11:18.070: INFO: namespace e2e-tests-statefulset-5pnwf deletion completed in 6.153708952s

• [SLOW TEST:27.212 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:11:18.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-767abb33-caf7-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 02:11:18.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-configmap-9gwf4" to be "success or failure"
Jul 21 02:11:18.234: INFO: Pod "pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.935274ms
Jul 21 02:11:20.258: INFO: Pod "pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033001734s
Jul 21 02:11:22.263: INFO: Pod "pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037426341s
STEP: Saw pod success
Jul 21 02:11:22.263: INFO: Pod "pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:11:22.265: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jul 21 02:11:22.410: INFO: Waiting for pod pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:11:22.424: INFO: Pod pod-configmaps-76822e89-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:11:22.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9gwf4" for this suite.
Jul 21 02:11:28.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:11:28.465: INFO: namespace: e2e-tests-configmap-9gwf4, resource: bindings, ignored listing per whitelist
Jul 21 02:11:28.522: INFO: namespace e2e-tests-configmap-9gwf4 deletion completed in 6.094540549s

• [SLOW TEST:10.452 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:11:28.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 21 02:11:36.706: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:36.720: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:38.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:38.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:40.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:40.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:42.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:42.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:44.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:44.724: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:46.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:46.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:48.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:48.724: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:50.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:50.724: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:52.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:52.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:54.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:54.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:56.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:56.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 21 02:11:58.720: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 21 02:11:58.725: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:11:58.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pgrgn" for this suite.
Jul 21 02:12:20.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:12:20.761: INFO: namespace: e2e-tests-container-lifecycle-hook-pgrgn, resource: bindings, ignored listing per whitelist
Jul 21 02:12:20.831: INFO: namespace e2e-tests-container-lifecycle-hook-pgrgn deletion completed in 22.093497104s

• [SLOW TEST:52.308 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:12:20.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:12:20.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-txv4w" for this suite.
Jul 21 02:12:26.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:12:27.059: INFO: namespace: e2e-tests-services-txv4w, resource: bindings, ignored listing per whitelist
Jul 21 02:12:27.062: INFO: namespace e2e-tests-services-txv4w deletion completed in 6.102571232s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.231 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:12:27.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 21 02:12:27.148: INFO: Waiting up to 5m0s for pod "pod-9f9539aa-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-emptydir-tsv8t" to be "success or failure"
Jul 21 02:12:27.158: INFO: Pod "pod-9f9539aa-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.911483ms
Jul 21 02:12:29.162: INFO: Pod "pod-9f9539aa-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014165861s
Jul 21 02:12:31.166: INFO: Pod "pod-9f9539aa-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018183272s
STEP: Saw pod success
Jul 21 02:12:31.166: INFO: Pod "pod-9f9539aa-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:12:31.169: INFO: Trying to get logs from node hunter-worker pod pod-9f9539aa-caf7-11ea-86e4-0242ac110009 container test-container: 
STEP: delete the pod
Jul 21 02:12:31.360: INFO: Waiting for pod pod-9f9539aa-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:12:31.455: INFO: Pod pod-9f9539aa-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:12:31.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tsv8t" for this suite.
Jul 21 02:12:37.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:12:37.598: INFO: namespace: e2e-tests-emptydir-tsv8t, resource: bindings, ignored listing per whitelist
Jul 21 02:12:37.619: INFO: namespace e2e-tests-emptydir-tsv8t deletion completed in 6.091093753s

• [SLOW TEST:10.557 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:12:37.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul 21 02:12:37.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 21 02:12:37.881: INFO: stderr: ""
Jul 21 02:12:37.881: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:12:37.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-68bjx" for this suite.
Jul 21 02:12:43.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:12:43.916: INFO: namespace: e2e-tests-kubectl-68bjx, resource: bindings, ignored listing per whitelist
Jul 21 02:12:43.979: INFO: namespace e2e-tests-kubectl-68bjx deletion completed in 6.094214146s

• [SLOW TEST:6.360 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
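The stdout captured above is wrapped in ANSI color escape sequences (`\x1b[0;32m` … `\x1b[0m`); the spec effectively validates the plain text underneath them. A minimal sketch of that check in Python — this is an illustration of the idea, not the e2e framework's actual helper:

```python
import re

# Raw kubectl cluster-info stdout as captured in the log above,
# including the ANSI color escape sequences.
raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
      "\x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n")

def strip_ansi(s: str) -> str:
    # Remove CSI color codes like \x1b[0;32m and \x1b[0m.
    return re.sub(r"\x1b\[[0-9;]*m", "", s)

plain = strip_ansi(raw)
assert "Kubernetes master is running at" in plain
```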
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:12:43.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a9af173c-caf7-11ea-86e4-0242ac110009
STEP: Creating a pod to test consume configMaps
Jul 21 02:12:44.100: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-projected-lzvjs" to be "success or failure"
Jul 21 02:12:44.161: INFO: Pod "pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 61.022725ms
Jul 21 02:12:46.165: INFO: Pod "pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065158329s
Jul 21 02:12:48.169: INFO: Pod "pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069209792s
STEP: Saw pod success
Jul 21 02:12:48.169: INFO: Pod "pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:12:48.172: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 21 02:12:48.209: INFO: Waiting for pod pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:12:48.225: INFO: Pod pod-projected-configmaps-a9b173fb-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:12:48.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lzvjs" for this suite.
Jul 21 02:12:54.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:12:54.318: INFO: namespace: e2e-tests-projected-lzvjs, resource: bindings, ignored listing per whitelist
Jul 21 02:12:54.334: INFO: namespace e2e-tests-projected-lzvjs deletion completed in 6.106120045s

• [SLOW TEST:10.355 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
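The pod created in the spec above consumes the ConfigMap through a `projected` volume source rather than a plain `configMap` volume. A hedged sketch of the shape of that pod spec, expressed as a Python dict — the names are illustrative, not the framework's exact fixture:

```python
# Sketch of a pod consuming a ConfigMap via a projected volume,
# as exercised by the test above. All names are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            # Print the projected file so the framework can check the
            # pod log, then exit 0 ("Succeeded" phase in the log above).
            "args": ["cat", "/etc/projected-configmap-volume/data-1"],
            "volumeMounts": [{
                "name": "projected-configmap-volume",
                "mountPath": "/etc/projected-configmap-volume",
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "projected-configmap-volume",
            "projected": {"sources": [{
                "configMap": {"name": "projected-configmap-test-volume-example"},
            }]},
        }],
    },
}
```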
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:12:54.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 21 02:12:54.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009" in namespace "e2e-tests-downward-api-rkv74" to be "success or failure"
Jul 21 02:12:54.495: INFO: Pod "downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.68384ms
Jul 21 02:12:56.520: INFO: Pod "downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042564831s
Jul 21 02:12:58.525: INFO: Pod "downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046742867s
STEP: Saw pod success
Jul 21 02:12:58.525: INFO: Pod "downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009" satisfied condition "success or failure"
Jul 21 02:12:58.527: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009 container client-container: 
STEP: delete the pod
Jul 21 02:12:58.630: INFO: Waiting for pod downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009 to disappear
Jul 21 02:12:58.690: INFO: Pod downwardapi-volume-afdf4ea3-caf7-11ea-86e4-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:12:58.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rkv74" for this suite.
Jul 21 02:13:04.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:13:04.807: INFO: namespace: e2e-tests-downward-api-rkv74, resource: bindings, ignored listing per whitelist
Jul 21 02:13:04.896: INFO: namespace e2e-tests-downward-api-rkv74 deletion completed in 6.202566049s

• [SLOW TEST:10.561 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
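The pod above mounts a downward API volume whose file is populated from the container's own CPU request via a `resourceFieldRef`. A hedged sketch of that volume definition (illustrative values, not the framework's exact fixture):

```python
# Sketch of a downward API volume exposing the container's cpu request
# as a file inside the pod, as exercised by the test above.
volume = {
    "name": "podinfo",
    "downwardAPI": {"items": [{
        "path": "cpu_request",
        "resourceFieldRef": {
            "containerName": "client-container",
            "resource": "requests.cpu",
            # divisor controls the reported units, e.g. "1m" to report
            # the request in millicores.
            "divisor": "1m",
        },
    }]},
}
```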
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:13:04.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nb46n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 21 02:13:04.972: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 21 02:13:31.294: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.35 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nb46n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 02:13:31.294: INFO: >>> kubeConfig: /root/.kube/config
I0721 02:13:31.332421       6 log.go:172] (0xc00168a370) (0xc000599540) Create stream
I0721 02:13:31.332465       6 log.go:172] (0xc00168a370) (0xc000599540) Stream added, broadcasting: 1
I0721 02:13:31.334374       6 log.go:172] (0xc00168a370) Reply frame received for 1
I0721 02:13:31.334419       6 log.go:172] (0xc00168a370) (0xc002daa820) Create stream
I0721 02:13:31.334438       6 log.go:172] (0xc00168a370) (0xc002daa820) Stream added, broadcasting: 3
I0721 02:13:31.335266       6 log.go:172] (0xc00168a370) Reply frame received for 3
I0721 02:13:31.335305       6 log.go:172] (0xc00168a370) (0xc0012600a0) Create stream
I0721 02:13:31.335322       6 log.go:172] (0xc00168a370) (0xc0012600a0) Stream added, broadcasting: 5
I0721 02:13:31.336219       6 log.go:172] (0xc00168a370) Reply frame received for 5
I0721 02:13:32.397056       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 02:13:32.397111       6 log.go:172] (0xc002daa820) (3) Data frame handling
I0721 02:13:32.397157       6 log.go:172] (0xc002daa820) (3) Data frame sent
I0721 02:13:32.397335       6 log.go:172] (0xc00168a370) Data frame received for 5
I0721 02:13:32.397395       6 log.go:172] (0xc0012600a0) (5) Data frame handling
I0721 02:13:32.397447       6 log.go:172] (0xc00168a370) Data frame received for 3
I0721 02:13:32.397468       6 log.go:172] (0xc002daa820) (3) Data frame handling
I0721 02:13:32.399433       6 log.go:172] (0xc00168a370) Data frame received for 1
I0721 02:13:32.399472       6 log.go:172] (0xc000599540) (1) Data frame handling
I0721 02:13:32.399512       6 log.go:172] (0xc000599540) (1) Data frame sent
I0721 02:13:32.399535       6 log.go:172] (0xc00168a370) (0xc000599540) Stream removed, broadcasting: 1
I0721 02:13:32.399552       6 log.go:172] (0xc00168a370) Go away received
I0721 02:13:32.399716       6 log.go:172] (0xc00168a370) (0xc000599540) Stream removed, broadcasting: 1
I0721 02:13:32.399747       6 log.go:172] (0xc00168a370) (0xc002daa820) Stream removed, broadcasting: 3
I0721 02:13:32.399769       6 log.go:172] (0xc00168a370) (0xc0012600a0) Stream removed, broadcasting: 5
Jul 21 02:13:32.399: INFO: Found all expected endpoints: [netserver-0]
Jul 21 02:13:32.403: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.38 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nb46n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 21 02:13:32.403: INFO: >>> kubeConfig: /root/.kube/config
I0721 02:13:32.438923       6 log.go:172] (0xc0017b02c0) (0xc001208000) Create stream
I0721 02:13:32.438962       6 log.go:172] (0xc0017b02c0) (0xc001208000) Stream added, broadcasting: 1
I0721 02:13:32.446073       6 log.go:172] (0xc0017b02c0) Reply frame received for 1
I0721 02:13:32.446112       6 log.go:172] (0xc0017b02c0) (0xc000599900) Create stream
I0721 02:13:32.446126       6 log.go:172] (0xc0017b02c0) (0xc000599900) Stream added, broadcasting: 3
I0721 02:13:32.447125       6 log.go:172] (0xc0017b02c0) Reply frame received for 3
I0721 02:13:32.447167       6 log.go:172] (0xc0017b02c0) (0xc001e9a000) Create stream
I0721 02:13:32.447188       6 log.go:172] (0xc0017b02c0) (0xc001e9a000) Stream added, broadcasting: 5
I0721 02:13:32.447978       6 log.go:172] (0xc0017b02c0) Reply frame received for 5
I0721 02:13:33.505779       6 log.go:172] (0xc0017b02c0) Data frame received for 3
I0721 02:13:33.505828       6 log.go:172] (0xc000599900) (3) Data frame handling
I0721 02:13:33.505860       6 log.go:172] (0xc000599900) (3) Data frame sent
I0721 02:13:33.505886       6 log.go:172] (0xc0017b02c0) Data frame received for 3
I0721 02:13:33.505901       6 log.go:172] (0xc000599900) (3) Data frame handling
I0721 02:13:33.506210       6 log.go:172] (0xc0017b02c0) Data frame received for 5
I0721 02:13:33.506246       6 log.go:172] (0xc001e9a000) (5) Data frame handling
I0721 02:13:33.508281       6 log.go:172] (0xc0017b02c0) Data frame received for 1
I0721 02:13:33.508304       6 log.go:172] (0xc001208000) (1) Data frame handling
I0721 02:13:33.508324       6 log.go:172] (0xc001208000) (1) Data frame sent
I0721 02:13:33.508341       6 log.go:172] (0xc0017b02c0) (0xc001208000) Stream removed, broadcasting: 1
I0721 02:13:33.508358       6 log.go:172] (0xc0017b02c0) Go away received
I0721 02:13:33.508499       6 log.go:172] (0xc0017b02c0) (0xc001208000) Stream removed, broadcasting: 1
I0721 02:13:33.508538       6 log.go:172] (0xc0017b02c0) (0xc000599900) Stream removed, broadcasting: 3
I0721 02:13:33.508560       6 log.go:172] (0xc0017b02c0) (0xc001e9a000) Stream removed, broadcasting: 5
Jul 21 02:13:33.508: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:13:33.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nb46n" for this suite.
Jul 21 02:13:57.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:13:57.585: INFO: namespace: e2e-tests-pod-network-test-nb46n, resource: bindings, ignored listing per whitelist
Jul 21 02:13:57.588: INFO: namespace e2e-tests-pod-network-test-nb46n deletion completed in 24.074355335s

• [SLOW TEST:52.692 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
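Each `ExecWithOptions` call above shells into the `host-test-container-pod` and probes a netserver endpoint over UDP with `nc`, expecting the peer's hostname back. A sketch of how that probe command string is assembled, mirroring the command lines in the log:

```python
def udp_probe_command(target_ip: str, port: int = 8081) -> str:
    # Send the literal string 'hostName' over UDP (-u) with a 1s timeout
    # (-w 1), and drop blank lines from the reply, exactly as in the
    # ExecWithOptions log lines above.
    return (f"echo 'hostName' | nc -w 1 -u {target_ip} {port} "
            "| grep -v '^\\s*$'")

cmd = udp_probe_command("10.244.1.35")
```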
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:13:57.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 21 02:13:57.767: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d595af0f-caf7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001bfdcaa), BlockOwnerDeletion:(*bool)(0xc001bfdcab)}}
Jul 21 02:13:57.809: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d593999a-caf7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc000939d72), BlockOwnerDeletion:(*bool)(0xc000939d73)}}
Jul 21 02:13:57.863: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d5940ae9-caf7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc0021d5722), BlockOwnerDeletion:(*bool)(0xc0021d5723)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:14:02.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-k6gmr" for this suite.
Jul 21 02:14:08.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:14:09.007: INFO: namespace: e2e-tests-gc-k6gmr, resource: bindings, ignored listing per whitelist
Jul 21 02:14:09.060: INFO: namespace e2e-tests-gc-k6gmr deletion completed in 6.091058974s

• [SLOW TEST:11.472 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
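The three `OwnerReferences` lines above deliberately form a circle (pod1 is owned by pod3, pod3 by pod2, pod2 by pod1), and the spec asserts the garbage collector still deletes them rather than deadlocking. A minimal sketch of detecting such a circle, assuming a simple name-to-owner mapping instead of real `OwnerReference` objects:

```python
def in_owner_cycle(owners: dict, start: str) -> bool:
    # Follow ownerReferences from `start`; revisiting a pod means the
    # ownership graph contains a dependency circle.
    seen = set()
    node = start
    while node in owners:
        if node in seen:
            return True
        seen.add(node)
        node = owners[node]
    return False

# Owner edges as logged above: pod1 owned by pod3, pod2 by pod1, pod3 by pod2.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
```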
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 21 02:14:09.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 21 02:14:15.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vkhvw" for this suite.
Jul 21 02:14:21.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 21 02:14:21.374: INFO: namespace: e2e-tests-emptydir-wrapper-vkhvw, resource: bindings, ignored listing per whitelist
Jul 21 02:14:21.391: INFO: namespace e2e-tests-emptydir-wrapper-vkhvw deletion completed in 6.072246459s

• [SLOW TEST:12.331 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jul 21 02:14:21.392: INFO: Running AfterSuite actions on all nodes
Jul 21 02:14:21.392: INFO: Running AfterSuite actions on node 1
Jul 21 02:14:21.392: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6282.802 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS