I0508 10:46:44.129261 6 e2e.go:224] Starting e2e run "34dad46a-9119-11ea-8adb-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588934803 - Will randomize all specs
Will run 201 of 2164 specs

May 8 10:46:44.346: INFO: >>> kubeConfig: /root/.kube/config
May 8 10:46:44.350: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 8 10:46:44.366: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 8 10:46:44.399: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 8 10:46:44.399: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 8 10:46:44.399: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 8 10:46:44.405: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 8 10:46:44.405: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 8 10:46:44.405: INFO: e2e test version: v1.13.12
May 8 10:46:44.406: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:46:44.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
May 8 10:46:44.981: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0508 10:47:26.308119 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 10:47:26.308: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:47:26.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cckst" for this suite.
May 8 10:47:34.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:47:34.422: INFO: namespace: e2e-tests-gc-cckst, resource: bindings, ignored listing per whitelist
May 8 10:47:34.427: INFO: namespace e2e-tests-gc-cckst deletion completed in 8.116152638s

• [SLOW TEST:50.021 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:47:34.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 8 10:47:34.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-mv549" to be "success or failure"
May 8 10:47:34.623: INFO: Pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208207ms
May 8 10:47:36.633: INFO: Pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016588096s
May 8 10:47:38.735: INFO: Pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118886955s
May 8 10:47:40.739: INFO: Pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122813705s
STEP: Saw pod success
May 8 10:47:40.739: INFO: Pod "downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:47:40.742: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017 container client-container:
STEP: delete the pod
May 8 10:47:40.763: INFO: Waiting for pod downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:47:40.768: INFO: Pod downwardapi-volume-534ee710-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:47:40.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mv549" for this suite.
May 8 10:47:46.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:47:46.828: INFO: namespace: e2e-tests-downward-api-mv549, resource: bindings, ignored listing per whitelist
May 8 10:47:46.858: INFO: namespace e2e-tests-downward-api-mv549 deletion completed in 6.087117999s

• [SLOW TEST:12.431 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:47:46.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5aa89111-9119-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May 8 10:47:47.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-gplzw" to be "success or failure"
May 8 10:47:47.027: INFO: Pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067851ms
May 8 10:47:49.064: INFO: Pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053684423s
May 8 10:47:51.068: INFO: Pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057729436s
May 8 10:47:53.072: INFO: Pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061972481s
STEP: Saw pod success
May 8 10:47:53.072: INFO: Pod "pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:47:53.076: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 8 10:47:53.093: INFO: Waiting for pod pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:47:53.110: INFO: Pod pod-projected-configmaps-5ab202a6-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:47:53.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gplzw" for this suite.
May 8 10:47:59.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:47:59.199: INFO: namespace: e2e-tests-projected-gplzw, resource: bindings, ignored listing per whitelist
May 8 10:47:59.231: INFO: namespace e2e-tests-projected-gplzw deletion completed in 6.117737541s

• [SLOW TEST:12.373 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:47:59.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 8 10:48:27.405: INFO: Container started at 2020-05-08 10:48:02 +0000 UTC, pod became ready at 2020-05-08 10:48:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:48:27.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dpntl" for this suite.
May 8 10:48:49.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:48:49.445: INFO: namespace: e2e-tests-container-probe-dpntl, resource: bindings, ignored listing per whitelist
May 8 10:48:49.503: INFO: namespace e2e-tests-container-probe-dpntl deletion completed in 22.095124278s

• [SLOW TEST:50.272 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:48:49.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-800426f0-9119-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume secrets
May 8 10:48:49.637: INFO: Waiting up to 5m0s for pod "pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-ljp67" to be "success or failure"
May 8 10:48:49.641: INFO: Pod "pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789823ms
May 8 10:48:51.645: INFO: Pod "pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007927943s
May 8 10:48:53.651: INFO: Pod "pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013028317s
STEP: Saw pod success
May 8 10:48:53.651: INFO: Pod "pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:48:53.654: INFO: Trying to get logs from node hunter-worker pod pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 8 10:48:53.800: INFO: Waiting for pod pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:48:53.809: INFO: Pod pod-secrets-8005ed6f-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:48:53.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ljp67" for this suite.
May 8 10:48:59.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:48:59.988: INFO: namespace: e2e-tests-secrets-ljp67, resource: bindings, ignored listing per whitelist
May 8 10:49:00.000: INFO: namespace e2e-tests-secrets-ljp67 deletion completed in 6.18719272s

• [SLOW TEST:10.496 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:49:00.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:49:04.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zkhz6" for this suite.
May 8 10:49:10.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:49:10.207: INFO: namespace: e2e-tests-kubelet-test-zkhz6, resource: bindings, ignored listing per whitelist
May 8 10:49:10.231: INFO: namespace e2e-tests-kubelet-test-zkhz6 deletion completed in 6.090329638s

• [SLOW TEST:10.231 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:49:10.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8c68e652-9119-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May 8 10:49:10.435: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-vwpv6" to be "success or failure"
May 8 10:49:10.464: INFO: Pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.58871ms
May 8 10:49:12.468: INFO: Pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032788772s
May 8 10:49:14.472: INFO: Pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.036989065s
May 8 10:49:16.476: INFO: Pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040670592s
STEP: Saw pod success
May 8 10:49:16.476: INFO: Pod "pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:49:16.478: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 8 10:49:16.499: INFO: Waiting for pod pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:49:16.510: INFO: Pod pod-projected-configmaps-8c6b31c7-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:49:16.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwpv6" for this suite.
May 8 10:49:22.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:49:22.573: INFO: namespace: e2e-tests-projected-vwpv6, resource: bindings, ignored listing per whitelist
May 8 10:49:22.597: INFO: namespace e2e-tests-projected-vwpv6 deletion completed in 6.083692248s

• [SLOW TEST:12.366 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:49:22.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:49:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5rm9r" for this suite.
May 8 10:49:44.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:49:44.815: INFO: namespace: e2e-tests-pods-5rm9r, resource: bindings, ignored listing per whitelist
May 8 10:49:44.862: INFO: namespace e2e-tests-pods-5rm9r deletion completed in 22.093496004s

• [SLOW TEST:22.265 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:49:44.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a10816b0-9119-11ea-8adb-0242ac110017
STEP: Creating configMap with name cm-test-opt-upd-a1081729-9119-11ea-8adb-0242ac110017
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a10816b0-9119-11ea-8adb-0242ac110017
STEP: Updating configmap cm-test-opt-upd-a1081729-9119-11ea-8adb-0242ac110017
STEP: Creating configMap with name cm-test-opt-create-a1081764-9119-11ea-8adb-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:49:53.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cbsj6" for this suite.
May 8 10:50:15.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:50:15.189: INFO: namespace: e2e-tests-projected-cbsj6, resource: bindings, ignored listing per whitelist
May 8 10:50:15.231: INFO: namespace e2e-tests-projected-cbsj6 deletion completed in 22.080836039s

• [SLOW TEST:30.369 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:50:15.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:50:19.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5nm8n" for this suite.
May 8 10:51:05.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:51:05.472: INFO: namespace: e2e-tests-kubelet-test-5nm8n, resource: bindings, ignored listing per whitelist
May 8 10:51:05.478: INFO: namespace e2e-tests-kubelet-test-5nm8n deletion completed in 46.117852662s

• [SLOW TEST:50.246 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:51:05.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 8 10:51:05.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-wdb67" to be "success or failure"
May 8 10:51:05.608: INFO: Pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.235062ms
May 8 10:51:07.639: INFO: Pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040333531s
May 8 10:51:09.643: INFO: Pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044859684s
May 8 10:51:11.648: INFO: Pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049014732s
STEP: Saw pod success
May 8 10:51:11.648: INFO: Pod "downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:51:11.651: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017 container client-container:
STEP: delete the pod
May 8 10:51:11.751: INFO: Waiting for pod downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:51:11.758: INFO: Pod downwardapi-volume-d10e846c-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:51:11.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wdb67" for this suite.
May 8 10:51:17.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:51:17.811: INFO: namespace: e2e-tests-downward-api-wdb67, resource: bindings, ignored listing per whitelist
May 8 10:51:17.843: INFO: namespace e2e-tests-downward-api-wdb67 deletion completed in 6.082829928s

• [SLOW TEST:12.366 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:51:17.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 8 10:51:18.026: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-mtn6r,SelfLink:/api/v1/namespaces/e2e-tests-watch-mtn6r/configmaps/e2e-watch-test-resource-version,UID:d86cc8d6-9119-11ea-99e8-0242ac110002,ResourceVersion:9397196,Generation:0,CreationTimestamp:2020-05-08 10:51:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 8 10:51:18.026: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-mtn6r,SelfLink:/api/v1/namespaces/e2e-tests-watch-mtn6r/configmaps/e2e-watch-test-resource-version,UID:d86cc8d6-9119-11ea-99e8-0242ac110002,ResourceVersion:9397197,Generation:0,CreationTimestamp:2020-05-08 10:51:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:51:18.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-mtn6r" for this suite.
May 8 10:51:24.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:51:24.206: INFO: namespace: e2e-tests-watch-mtn6r, resource: bindings, ignored listing per whitelist
May 8 10:51:24.227: INFO: namespace e2e-tests-watch-mtn6r deletion completed in 6.1874984s
• [SLOW TEST:6.384 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:51:24.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 8 10:51:24.319: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:51:33.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zg4bn" for this suite.
May 8 10:51:55.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:51:56.103: INFO: namespace: e2e-tests-init-container-zg4bn, resource: bindings, ignored listing per whitelist
May 8 10:51:56.103: INFO: namespace e2e-tests-init-container-zg4bn deletion completed in 22.200516956s
• [SLOW TEST:31.876 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:51:56.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ef39cfba-9119-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume secrets
May 8 10:51:56.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-cbbdg" to be "success or failure"
May 8 10:51:56.260: INFO: Pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.508794ms
May 8 10:51:58.272: INFO: Pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057616862s
May 8 10:52:00.276: INFO: Pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.061973064s
May 8 10:52:02.280: INFO: Pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066458927s
STEP: Saw pod success
May 8 10:52:02.280: INFO: Pod "pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 10:52:02.284: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May 8 10:52:02.345: INFO: Waiting for pod pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017 to disappear
May 8 10:52:02.355: INFO: Pod pod-projected-secrets-ef3b4ee7-9119-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:52:02.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cbbdg" for this suite.
May 8 10:52:08.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:52:08.421: INFO: namespace: e2e-tests-projected-cbbdg, resource: bindings, ignored listing per whitelist
May 8 10:52:08.465: INFO: namespace e2e-tests-projected-cbbdg deletion completed in 6.106410511s
• [SLOW TEST:12.361 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:52:08.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r7zkn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 8 10:52:08.603: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 8 10:52:36.981: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.235:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r7zkn PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 10:52:36.981: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:52:37.020432 6 log.go:172] (0xc001b14420) (0xc001daf180) Create stream
I0508 10:52:37.020463 6 log.go:172] (0xc001b14420) (0xc001daf180) Stream added, broadcasting: 1
I0508 10:52:37.023568 6 log.go:172] (0xc001b14420) Reply frame received for 1
I0508 10:52:37.023610 6 log.go:172] (0xc001b14420) (0xc00147cfa0) Create stream
I0508 10:52:37.023629 6 log.go:172] (0xc001b14420) (0xc00147cfa0) Stream added, broadcasting: 3
I0508 10:52:37.024854 6 log.go:172] (0xc001b14420) Reply frame received for 3
I0508 10:52:37.024884 6 log.go:172] (0xc001b14420) (0xc001daf220) Create stream
I0508 10:52:37.024895 6 log.go:172] (0xc001b14420) (0xc001daf220) Stream added, broadcasting: 5
I0508 10:52:37.026246 6 log.go:172] (0xc001b14420) Reply frame received for 5
I0508 10:52:37.111868 6 log.go:172] (0xc001b14420) Data frame received for 5
I0508 10:52:37.111915 6 log.go:172] (0xc001daf220) (5) Data frame handling
I0508 10:52:37.111957 6 log.go:172] (0xc001b14420) Data frame received for 3
I0508 10:52:37.111985 6 log.go:172] (0xc00147cfa0) (3) Data frame handling
I0508 10:52:37.112019 6 log.go:172] (0xc00147cfa0) (3) Data frame sent
I0508 10:52:37.112040 6 log.go:172] (0xc001b14420) Data frame received for 3
I0508 10:52:37.112058 6 log.go:172] (0xc00147cfa0) (3) Data frame handling
I0508 10:52:37.113697 6 log.go:172] (0xc001b14420) Data frame received for 1
I0508 10:52:37.113717 6 log.go:172] (0xc001daf180) (1) Data frame handling
I0508 10:52:37.113737 6 log.go:172] (0xc001daf180) (1) Data frame sent
I0508 10:52:37.113747 6 log.go:172] (0xc001b14420) (0xc001daf180) Stream removed, broadcasting: 1
I0508 10:52:37.113759 6 log.go:172] (0xc001b14420) Go away received
I0508 10:52:37.113967 6 log.go:172] (0xc001b14420) (0xc001daf180) Stream removed, broadcasting: 1
I0508 10:52:37.113992 6 log.go:172] (0xc001b14420) (0xc00147cfa0) Stream removed, broadcasting: 3
I0508 10:52:37.114011 6 log.go:172] (0xc001b14420) (0xc001daf220) Stream removed, broadcasting: 5
May 8 10:52:37.114: INFO: Found all expected endpoints: [netserver-0]
May 8 10:52:37.116: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.81:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r7zkn PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 10:52:37.116: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:52:37.144456 6 log.go:172] (0xc001b140b0) (0xc001ba2000) Create stream
I0508 10:52:37.144480 6 log.go:172] (0xc001b140b0) (0xc001ba2000) Stream added, broadcasting: 1
I0508 10:52:37.146737 6 log.go:172] (0xc001b140b0) Reply frame received for 1
I0508 10:52:37.146806 6 log.go:172] (0xc001b140b0) (0xc001f30000) Create stream
I0508 10:52:37.146826 6 log.go:172] (0xc001b140b0) (0xc001f30000) Stream added, broadcasting: 3
I0508 10:52:37.147694 6 log.go:172] (0xc001b140b0) Reply frame received for 3
I0508 10:52:37.147735 6 log.go:172] (0xc001b140b0) (0xc001f300a0) Create stream
I0508 10:52:37.147748 6 log.go:172] (0xc001b140b0) (0xc001f300a0) Stream added, broadcasting: 5
I0508 10:52:37.148463 6 log.go:172] (0xc001b140b0) Reply frame received for 5
I0508 10:52:37.228504 6 log.go:172] (0xc001b140b0) Data frame received for 3
I0508 10:52:37.228534 6 log.go:172] (0xc001f30000) (3) Data frame handling
I0508 10:52:37.228542 6 log.go:172] (0xc001f30000) (3) Data frame sent
I0508 10:52:37.228630 6 log.go:172] (0xc001b140b0) Data frame received for 3
I0508 10:52:37.228654 6 log.go:172] (0xc001f30000) (3) Data frame handling
I0508 10:52:37.228673 6 log.go:172] (0xc001b140b0) Data frame received for 5
I0508 10:52:37.228680 6 log.go:172] (0xc001f300a0) (5) Data frame handling
I0508 10:52:37.229806 6 log.go:172] (0xc001b140b0) Data frame received for 1
I0508 10:52:37.229823 6 log.go:172] (0xc001ba2000) (1) Data frame handling
I0508 10:52:37.229855 6 log.go:172] (0xc001ba2000) (1) Data frame sent
I0508 10:52:37.229873 6 log.go:172] (0xc001b140b0) (0xc001ba2000) Stream removed, broadcasting: 1
I0508 10:52:37.229886 6 log.go:172] (0xc001b140b0) Go away received
I0508 10:52:37.230020 6 log.go:172] (0xc001b140b0) (0xc001ba2000) Stream removed, broadcasting: 1
I0508 10:52:37.230038 6 log.go:172] (0xc001b140b0) (0xc001f30000) Stream removed, broadcasting: 3
I0508 10:52:37.230053 6 log.go:172] (0xc001b140b0) (0xc001f300a0) Stream removed, broadcasting: 5
May 8 10:52:37.230: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:52:37.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r7zkn" for this suite.
May 8 10:53:01.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:53:01.420: INFO: namespace: e2e-tests-pod-network-test-r7zkn, resource: bindings, ignored listing per whitelist
May 8 10:53:01.475: INFO: namespace e2e-tests-pod-network-test-r7zkn deletion completed in 24.166689383s
• [SLOW TEST:53.011 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:53:01.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jnjk4
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
May 8 10:53:01.597: INFO: Found 0 stateful pods, waiting for 3
May 8 10:53:11.601: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 8 10:53:11.601: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 8 10:53:11.601: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 8 10:53:21.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 8 10:53:21.602: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 8 10:53:21.602: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 8 10:53:21.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnjk4 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 8 10:53:21.882: INFO: stderr: "I0508 10:53:21.760616 40 log.go:172] (0xc000138790) (0xc0005e3220) Create stream\nI0508 10:53:21.760674 40 log.go:172] (0xc000138790) (0xc0005e3220) Stream added, broadcasting: 1\nI0508 10:53:21.762845 40 log.go:172] (0xc000138790) Reply frame received for 1\nI0508 10:53:21.762881 40 log.go:172] (0xc000138790) (0xc000726000) Create stream\nI0508 10:53:21.762891 40 log.go:172] (0xc000138790) (0xc000726000) Stream added, broadcasting: 3\nI0508 10:53:21.763731 40 log.go:172] (0xc000138790) Reply frame received for 3\nI0508 10:53:21.763784 40 log.go:172] (0xc000138790) (0xc0005e32c0) Create stream\nI0508 10:53:21.763798 40 log.go:172] (0xc000138790) (0xc0005e32c0) Stream added, broadcasting: 5\nI0508 10:53:21.764542 40 log.go:172] (0xc000138790) Reply frame received for 5\nI0508 10:53:21.875542 40 log.go:172] (0xc000138790) Data frame received for 5\nI0508 10:53:21.875646 40 log.go:172] (0xc0005e32c0) (5) Data frame handling\nI0508 10:53:21.875692 40 log.go:172] (0xc000138790) Data frame received for 3\nI0508 10:53:21.875702 40 log.go:172] (0xc000726000) (3) Data frame handling\nI0508 10:53:21.875721 40 log.go:172] (0xc000726000) (3) Data frame sent\nI0508 10:53:21.875737 40 log.go:172] (0xc000138790) Data frame received for 3\nI0508 10:53:21.875751 40 log.go:172] (0xc000726000) (3) Data frame handling\nI0508 10:53:21.878015 40 log.go:172] (0xc000138790) Data frame received for 1\nI0508 10:53:21.878042 40 log.go:172] (0xc0005e3220) (1) Data frame handling\nI0508 10:53:21.878059 40 log.go:172] (0xc0005e3220) (1) Data frame sent\nI0508 10:53:21.878077 40 log.go:172] (0xc000138790) (0xc0005e3220) Stream removed, broadcasting: 1\nI0508 10:53:21.878093 40 log.go:172] (0xc000138790) Go away received\nI0508 10:53:21.878403 40 log.go:172] (0xc000138790) (0xc0005e3220) Stream removed, broadcasting: 1\nI0508 10:53:21.878435 40 log.go:172] (0xc000138790) (0xc000726000) Stream removed, broadcasting: 3\nI0508 10:53:21.878447 40 log.go:172] (0xc000138790) (0xc0005e32c0) Stream removed, broadcasting: 5\n"
May 8 10:53:21.882: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 8 10:53:21.882: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 8 10:53:31.916: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 8 10:53:41.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnjk4 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 10:53:42.219: INFO: stderr: "I0508 10:53:42.134562 63 log.go:172] (0xc000138840) (0xc0007a6640) Create stream\nI0508 10:53:42.134618 63 log.go:172] (0xc000138840) (0xc0007a6640) Stream added, broadcasting: 1\nI0508 10:53:42.146835 63 log.go:172] (0xc000138840) Reply frame received for 1\nI0508 10:53:42.146898 63 log.go:172] (0xc000138840) (0xc0007a66e0) Create stream\nI0508 10:53:42.146911 63 log.go:172] (0xc000138840) (0xc0007a66e0) Stream added, broadcasting: 3\nI0508 10:53:42.148006 63 log.go:172] (0xc000138840) Reply frame received for 3\nI0508 10:53:42.148043 63 log.go:172] (0xc000138840) (0xc000570e60) Create stream\nI0508 10:53:42.148060 63 log.go:172] (0xc000138840) (0xc000570e60) Stream added, broadcasting: 5\nI0508 10:53:42.148982 63 log.go:172] (0xc000138840) Reply frame received for 5\nI0508 10:53:42.212699 63 log.go:172] (0xc000138840) Data frame received for 3\nI0508 10:53:42.212721 63 log.go:172] (0xc0007a66e0) (3) Data frame handling\nI0508 10:53:42.212734 63 log.go:172] (0xc0007a66e0) (3) Data frame sent\nI0508 10:53:42.212739 63 log.go:172] (0xc000138840) Data frame received for 3\nI0508 10:53:42.212742 63 log.go:172] (0xc0007a66e0) (3) Data frame handling\nI0508 10:53:42.212825 63 log.go:172] (0xc000138840) Data frame received for 5\nI0508 10:53:42.212840 63 log.go:172] (0xc000570e60) (5) Data frame handling\nI0508 10:53:42.214738 63 log.go:172] (0xc000138840) Data frame received for 1\nI0508 10:53:42.214752 63 log.go:172] (0xc0007a6640) (1) Data frame handling\nI0508 10:53:42.214758 63 log.go:172] (0xc0007a6640) (1) Data frame sent\nI0508 10:53:42.214960 63 log.go:172] (0xc000138840) (0xc0007a6640) Stream removed, broadcasting: 1\nI0508 10:53:42.215006 63 log.go:172] (0xc000138840) Go away received\nI0508 10:53:42.215147 63 log.go:172] (0xc000138840) (0xc0007a6640) Stream removed, broadcasting: 1\nI0508 10:53:42.215174 63 log.go:172] (0xc000138840) (0xc0007a66e0) Stream removed, broadcasting: 3\nI0508 10:53:42.215184 63 log.go:172] (0xc000138840) (0xc000570e60) Stream removed, broadcasting: 5\n"
May 8 10:53:42.219: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 8 10:53:42.219: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 8 10:54:02.320: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnjk4/ss2 to complete update
May 8 10:54:02.320: INFO: Waiting for Pod e2e-tests-statefulset-jnjk4/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 8 10:54:12.328: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnjk4/ss2 to complete update
STEP: Rolling back to a previous revision
May 8 10:54:22.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnjk4 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 8 10:54:22.621: INFO: stderr: "I0508 10:54:22.482415 87 log.go:172] (0xc00013a840) (0xc0005af220) Create stream\nI0508 10:54:22.482474 87 log.go:172] (0xc00013a840) (0xc0005af220) Stream added, broadcasting: 1\nI0508 10:54:22.485108 87 log.go:172] (0xc00013a840) Reply frame received for 1\nI0508 10:54:22.485320 87 log.go:172] (0xc00013a840) (0xc0005af2c0) Create stream\nI0508 10:54:22.485342 87 log.go:172] (0xc00013a840) (0xc0005af2c0) Stream added, broadcasting: 3\nI0508 10:54:22.486357 87 log.go:172] (0xc00013a840) Reply frame received for 3\nI0508 10:54:22.486393 87 log.go:172] (0xc00013a840) (0xc000656000) Create stream\nI0508 10:54:22.486405 87 log.go:172] (0xc00013a840) (0xc000656000) Stream added, broadcasting: 5\nI0508 10:54:22.487368 87 log.go:172] (0xc00013a840) Reply frame received for 5\nI0508 10:54:22.612998 87 log.go:172] (0xc00013a840) Data frame received for 5\nI0508 10:54:22.613045 87 log.go:172] (0xc000656000) (5) Data frame handling\nI0508 10:54:22.613076 87 log.go:172] (0xc00013a840) Data frame received for 3\nI0508 10:54:22.613096 87 log.go:172] (0xc0005af2c0) (3) Data frame handling\nI0508 10:54:22.613277 87 log.go:172] (0xc0005af2c0) (3) Data frame sent\nI0508 10:54:22.613811 87 log.go:172] (0xc00013a840) Data frame received for 3\nI0508 10:54:22.613837 87 log.go:172] (0xc0005af2c0) (3) Data frame handling\nI0508 10:54:22.616041 87 log.go:172] (0xc00013a840) Data frame received for 1\nI0508 10:54:22.616149 87 log.go:172] (0xc0005af220) (1) Data frame handling\nI0508 10:54:22.616184 87 log.go:172] (0xc0005af220) (1) Data frame sent\nI0508 10:54:22.616198 87 log.go:172] (0xc00013a840) (0xc0005af220) Stream removed, broadcasting: 1\nI0508 10:54:22.616215 87 log.go:172] (0xc00013a840) Go away received\nI0508 10:54:22.616494 87 log.go:172] (0xc00013a840) (0xc0005af220) Stream removed, broadcasting: 1\nI0508 10:54:22.616514 87 log.go:172] (0xc00013a840) (0xc0005af2c0) Stream removed, broadcasting: 3\nI0508 10:54:22.616525 87 log.go:172] (0xc00013a840) (0xc000656000) Stream removed, broadcasting: 5\n"
May 8 10:54:22.621: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 8 10:54:22.621: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 8 10:54:32.651: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 8 10:54:43.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnjk4 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 10:54:43.474: INFO: stderr: "I0508 10:54:43.392956 108 log.go:172] (0xc000138630) (0xc00038a780) Create stream\nI0508 10:54:43.393010 108 log.go:172] (0xc000138630) (0xc00038a780) Stream added, broadcasting: 1\nI0508 10:54:43.407718 108 log.go:172] (0xc000138630) Reply frame received for 1\nI0508 10:54:43.407783 108 log.go:172] (0xc000138630) (0xc0006b8000) Create stream\nI0508 10:54:43.407799 108 log.go:172] (0xc000138630) (0xc0006b8000) Stream added, broadcasting: 3\nI0508 10:54:43.410058 108 log.go:172] (0xc000138630) Reply frame received for 3\nI0508 10:54:43.410101 108 log.go:172] (0xc000138630) (0xc00038a820) Create stream\nI0508 10:54:43.410112 108 log.go:172] (0xc000138630) (0xc00038a820) Stream added, broadcasting: 5\nI0508 10:54:43.414875 108 log.go:172] (0xc000138630) Reply frame received for 5\nI0508 10:54:43.469391 108 log.go:172] (0xc000138630) Data frame received for 5\nI0508 10:54:43.469446 108 log.go:172] (0xc00038a820) (5) Data frame handling\nI0508 10:54:43.469482 108 log.go:172] (0xc000138630) Data frame received for 3\nI0508 10:54:43.469497 108 log.go:172] (0xc0006b8000) (3) Data frame handling\nI0508 10:54:43.469517 108 log.go:172] (0xc0006b8000) (3) Data frame sent\nI0508 10:54:43.469566 108 log.go:172] (0xc000138630) Data frame received for 3\nI0508 10:54:43.469581 108 log.go:172] (0xc0006b8000) (3) Data frame handling\nI0508 10:54:43.471284 108 log.go:172] (0xc000138630) Data frame received for 1\nI0508 10:54:43.471306 108 log.go:172] (0xc00038a780) (1) Data frame handling\nI0508 10:54:43.471320 108 log.go:172] (0xc00038a780) (1) Data frame sent\nI0508 10:54:43.471337 108 log.go:172] (0xc000138630) (0xc00038a780) Stream removed, broadcasting: 1\nI0508 10:54:43.471372 108 log.go:172] (0xc000138630) Go away received\nI0508 10:54:43.471548 108 log.go:172] (0xc000138630) (0xc00038a780) Stream removed, broadcasting: 1\nI0508 10:54:43.471566 108 log.go:172] (0xc000138630) (0xc0006b8000) Stream removed, broadcasting: 3\nI0508 10:54:43.471572 108 log.go:172] (0xc000138630) (0xc00038a820) Stream removed, broadcasting: 5\n"
May 8 10:54:43.474: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 8 10:54:43.474: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 8 10:54:53.532: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnjk4/ss2 to complete update
May 8 10:54:53.533: INFO: Waiting for Pod e2e-tests-statefulset-jnjk4/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 8 10:54:53.533: INFO: Waiting for Pod e2e-tests-statefulset-jnjk4/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 8 10:55:03.574: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnjk4/ss2 to complete update
May 8 10:55:03.574: INFO: Waiting for Pod e2e-tests-statefulset-jnjk4/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 8 10:55:13.542: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jnjk4
May 8 10:55:13.544: INFO: Scaling statefulset ss2 to 0
May 8 10:55:33.587: INFO: Waiting for statefulset status.replicas updated to 0
May 8 10:55:33.590: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:55:33.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jnjk4" for this suite.
May 8 10:55:41.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 10:55:41.679: INFO: namespace: e2e-tests-statefulset-jnjk4, resource: bindings, ignored listing per whitelist
May 8 10:55:41.723: INFO: namespace e2e-tests-statefulset-jnjk4 deletion completed in 8.117110666s
• [SLOW TEST:160.248 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 10:55:41.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 8 10:55:41.955: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cg7nx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cg7nx/configmaps/e2e-watch-test-watch-closed,UID:75bcab18-911a-11ea-99e8-0242ac110002,ResourceVersion:9398191,Generation:0,CreationTimestamp:2020-05-08 10:55:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 8 10:55:41.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cg7nx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cg7nx/configmaps/e2e-watch-test-watch-closed,UID:75bcab18-911a-11ea-99e8-0242ac110002,ResourceVersion:9398192,Generation:0,CreationTimestamp:2020-05-08 10:55:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 8 10:55:41.992: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cg7nx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cg7nx/configmaps/e2e-watch-test-watch-closed,UID:75bcab18-911a-11ea-99e8-0242ac110002,ResourceVersion:9398193,Generation:0,CreationTimestamp:2020-05-08 10:55:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 8 10:55:41.993: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cg7nx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cg7nx/configmaps/e2e-watch-test-watch-closed,UID:75bcab18-911a-11ea-99e8-0242ac110002,ResourceVersion:9398194,Generation:0,CreationTimestamp:2020-05-08 10:55:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 10:55:41.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-cg7nx" for this suite.
May 8 10:55:48.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:55:48.207: INFO: namespace: e2e-tests-watch-cg7nx, resource: bindings, ignored listing per whitelist May 8 10:55:48.267: INFO: namespace e2e-tests-watch-cg7nx deletion completed in 6.220723263s • [SLOW TEST:6.543 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:55:48.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-4nglc STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-4nglc STEP: Deleting pre-stop pod May 8 10:56:01.561: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:56:01.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-4nglc" for this suite. May 8 10:56:39.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:56:39.626: INFO: namespace: e2e-tests-prestop-4nglc, resource: bindings, ignored listing per whitelist May 8 10:56:39.664: INFO: namespace e2e-tests-prestop-4nglc deletion completed in 38.084747868s • [SLOW TEST:51.397 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:56:39.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-9853a5b8-911a-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume 
configMaps May 8 10:56:39.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-phmpq" to be "success or failure" May 8 10:56:39.980: INFO: Pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 27.103804ms May 8 10:56:42.099: INFO: Pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145642756s May 8 10:56:44.103: INFO: Pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150083728s May 8 10:56:46.107: INFO: Pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154005722s STEP: Saw pod success May 8 10:56:46.107: INFO: Pod "pod-configmaps-98569382-911a-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 10:56:46.110: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-98569382-911a-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 10:56:46.137: INFO: Waiting for pod pod-configmaps-98569382-911a-11ea-8adb-0242ac110017 to disappear May 8 10:56:46.156: INFO: Pod pod-configmaps-98569382-911a-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:56:46.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-phmpq" for this suite. 
May 8 10:56:52.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:56:52.435: INFO: namespace: e2e-tests-configmap-phmpq, resource: bindings, ignored listing per whitelist May 8 10:56:52.453: INFO: namespace e2e-tests-configmap-phmpq deletion completed in 6.293155886s • [SLOW TEST:12.789 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:56:52.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 10:56:52.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-q2865" to be "success or failure" May 8 10:56:52.579: INFO: Pod "downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.462388ms May 8 10:56:54.583: INFO: Pod "downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007905867s May 8 10:56:56.588: INFO: Pod "downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012689285s STEP: Saw pod success May 8 10:56:56.588: INFO: Pod "downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 10:56:56.591: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 10:56:56.692: INFO: Waiting for pod downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017 to disappear May 8 10:56:56.759: INFO: Pod downwardapi-volume-9fe01907-911a-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:56:56.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q2865" for this suite. 
May 8 10:57:02.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:57:02.788: INFO: namespace: e2e-tests-downward-api-q2865, resource: bindings, ignored listing per whitelist May 8 10:57:02.870: INFO: namespace e2e-tests-downward-api-q2865 deletion completed in 6.107080128s • [SLOW TEST:10.417 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:57:02.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 10:57:03.024: INFO: Creating deployment "test-recreate-deployment" May 8 10:57:03.028: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 8 10:57:03.049: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 8 10:57:05.058: INFO: Waiting deployment "test-recreate-deployment" to complete May 8 10:57:05.060: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 10:57:07.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532223, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 10:57:09.197: INFO: Triggering a new rollout for deployment "test-recreate-deployment" 
May 8 10:57:09.203: INFO: Updating deployment test-recreate-deployment May 8 10:57:09.203: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 8 10:57:11.131: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jtmp9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jtmp9/deployments/test-recreate-deployment,UID:a61c21ba-911a-11ea-99e8-0242ac110002,ResourceVersion:9398505,Generation:2,CreationTimestamp:2020-05-08 10:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-08 10:57:10 +0000 UTC 2020-05-08 10:57:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-08 10:57:11 +0000 UTC 2020-05-08 10:57:03 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 8 10:57:11.134: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jtmp9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jtmp9/replicasets/test-recreate-deployment-589c4bfd,UID:aa43ad97-911a-11ea-99e8-0242ac110002,ResourceVersion:9398503,Generation:1,CreationTimestamp:2020-05-08 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a61c21ba-911a-11ea-99e8-0242ac110002 0xc0015c202f 0xc0015c2040}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 10:57:11.134: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 8 10:57:11.134: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jtmp9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jtmp9/replicasets/test-recreate-deployment-5bf7f65dc,UID:a61f9ea2-911a-11ea-99e8-0242ac110002,ResourceVersion:9398488,Generation:2,CreationTimestamp:2020-05-08 10:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a61c21ba-911a-11ea-99e8-0242ac110002 0xc0015c2100 0xc0015c2101}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 10:57:11.136: INFO: Pod "test-recreate-deployment-589c4bfd-zp9xk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-zp9xk,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jtmp9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jtmp9/pods/test-recreate-deployment-589c4bfd-zp9xk,UID:aa4429cd-911a-11ea-99e8-0242ac110002,ResourceVersion:9398502,Generation:0,CreationTimestamp:2020-05-08 10:57:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd aa43ad97-911a-11ea-99e8-0242ac110002 0xc001d45e2f 0xc001d45e40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6gk47 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6gk47,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6gk47 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d45eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d45ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 10:57:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 10:57:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 10:57:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 10:57:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 10:57:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:57:11.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-jtmp9" for this suite. 
May 8 10:57:17.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:57:17.424: INFO: namespace: e2e-tests-deployment-jtmp9, resource: bindings, ignored listing per whitelist May 8 10:57:17.435: INFO: namespace e2e-tests-deployment-jtmp9 deletion completed in 6.296268574s • [SLOW TEST:14.565 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:57:17.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 8 10:57:27.598: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:27.598: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:27.628501 6 log.go:172] (0xc000afcfd0) 
(0xc001a103c0) Create stream I0508 10:57:27.628531 6 log.go:172] (0xc000afcfd0) (0xc001a103c0) Stream added, broadcasting: 1 I0508 10:57:27.630858 6 log.go:172] (0xc000afcfd0) Reply frame received for 1 I0508 10:57:27.630903 6 log.go:172] (0xc000afcfd0) (0xc001a9c000) Create stream I0508 10:57:27.630917 6 log.go:172] (0xc000afcfd0) (0xc001a9c000) Stream added, broadcasting: 3 I0508 10:57:27.631683 6 log.go:172] (0xc000afcfd0) Reply frame received for 3 I0508 10:57:27.631722 6 log.go:172] (0xc000afcfd0) (0xc001a9c0a0) Create stream I0508 10:57:27.631736 6 log.go:172] (0xc000afcfd0) (0xc001a9c0a0) Stream added, broadcasting: 5 I0508 10:57:27.632617 6 log.go:172] (0xc000afcfd0) Reply frame received for 5 I0508 10:57:27.690764 6 log.go:172] (0xc000afcfd0) Data frame received for 5 I0508 10:57:27.690805 6 log.go:172] (0xc001a9c0a0) (5) Data frame handling I0508 10:57:27.691120 6 log.go:172] (0xc000afcfd0) Data frame received for 3 I0508 10:57:27.691139 6 log.go:172] (0xc001a9c000) (3) Data frame handling I0508 10:57:27.691151 6 log.go:172] (0xc001a9c000) (3) Data frame sent I0508 10:57:27.691164 6 log.go:172] (0xc000afcfd0) Data frame received for 3 I0508 10:57:27.691174 6 log.go:172] (0xc001a9c000) (3) Data frame handling I0508 10:57:27.692666 6 log.go:172] (0xc000afcfd0) Data frame received for 1 I0508 10:57:27.692679 6 log.go:172] (0xc001a103c0) (1) Data frame handling I0508 10:57:27.692688 6 log.go:172] (0xc001a103c0) (1) Data frame sent I0508 10:57:27.692754 6 log.go:172] (0xc000afcfd0) (0xc001a103c0) Stream removed, broadcasting: 1 I0508 10:57:27.692832 6 log.go:172] (0xc000afcfd0) (0xc001a103c0) Stream removed, broadcasting: 1 I0508 10:57:27.692846 6 log.go:172] (0xc000afcfd0) (0xc001a9c000) Stream removed, broadcasting: 3 I0508 10:57:27.692938 6 log.go:172] (0xc000afcfd0) Go away received I0508 10:57:27.692965 6 log.go:172] (0xc000afcfd0) (0xc001a9c0a0) Stream removed, broadcasting: 5 May 8 10:57:27.692: INFO: Exec stderr: "" May 8 10:57:27.693: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:27.693: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:27.718195 6 log.go:172] (0xc000afd4a0) (0xc001a10640) Create stream I0508 10:57:27.718226 6 log.go:172] (0xc000afd4a0) (0xc001a10640) Stream added, broadcasting: 1 I0508 10:57:27.720145 6 log.go:172] (0xc000afd4a0) Reply frame received for 1 I0508 10:57:27.720194 6 log.go:172] (0xc000afd4a0) (0xc001dae1e0) Create stream I0508 10:57:27.720208 6 log.go:172] (0xc000afd4a0) (0xc001dae1e0) Stream added, broadcasting: 3 I0508 10:57:27.721014 6 log.go:172] (0xc000afd4a0) Reply frame received for 3 I0508 10:57:27.721067 6 log.go:172] (0xc000afd4a0) (0xc001dae280) Create stream I0508 10:57:27.721080 6 log.go:172] (0xc000afd4a0) (0xc001dae280) Stream added, broadcasting: 5 I0508 10:57:27.722058 6 log.go:172] (0xc000afd4a0) Reply frame received for 5 I0508 10:57:27.771721 6 log.go:172] (0xc000afd4a0) Data frame received for 3 I0508 10:57:27.771755 6 log.go:172] (0xc001dae1e0) (3) Data frame handling I0508 10:57:27.771764 6 log.go:172] (0xc001dae1e0) (3) Data frame sent I0508 10:57:27.771782 6 log.go:172] (0xc000afd4a0) Data frame received for 5 I0508 10:57:27.771813 6 log.go:172] (0xc001dae280) (5) Data frame handling I0508 10:57:27.771843 6 log.go:172] (0xc000afd4a0) Data frame received for 3 I0508 10:57:27.771859 6 log.go:172] (0xc001dae1e0) (3) Data frame handling I0508 10:57:27.773363 6 log.go:172] (0xc000afd4a0) Data frame received for 1 I0508 10:57:27.773392 6 log.go:172] (0xc001a10640) (1) Data frame handling I0508 10:57:27.773415 6 log.go:172] (0xc001a10640) (1) Data frame sent I0508 10:57:27.773434 6 log.go:172] (0xc000afd4a0) (0xc001a10640) Stream removed, broadcasting: 1 I0508 10:57:27.773477 6 log.go:172] (0xc000afd4a0) Go away received I0508 10:57:27.773581 6 log.go:172] 
(0xc000afd4a0) (0xc001a10640) Stream removed, broadcasting: 1 I0508 10:57:27.773610 6 log.go:172] (0xc000afd4a0) (0xc001dae1e0) Stream removed, broadcasting: 3 I0508 10:57:27.773674 6 log.go:172] (0xc000afd4a0) (0xc001dae280) Stream removed, broadcasting: 5 May 8 10:57:27.773: INFO: Exec stderr: "" May 8 10:57:27.773: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:27.773: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:27.802428 6 log.go:172] (0xc001b14420) (0xc001a9c3c0) Create stream I0508 10:57:27.802459 6 log.go:172] (0xc001b14420) (0xc001a9c3c0) Stream added, broadcasting: 1 I0508 10:57:27.812830 6 log.go:172] (0xc001b14420) Reply frame received for 1 I0508 10:57:27.812884 6 log.go:172] (0xc001b14420) (0xc001c800a0) Create stream I0508 10:57:27.812901 6 log.go:172] (0xc001b14420) (0xc001c800a0) Stream added, broadcasting: 3 I0508 10:57:27.814752 6 log.go:172] (0xc001b14420) Reply frame received for 3 I0508 10:57:27.814838 6 log.go:172] (0xc001b14420) (0xc001a9c460) Create stream I0508 10:57:27.814889 6 log.go:172] (0xc001b14420) (0xc001a9c460) Stream added, broadcasting: 5 I0508 10:57:27.817956 6 log.go:172] (0xc001b14420) Reply frame received for 5 I0508 10:57:27.880088 6 log.go:172] (0xc001b14420) Data frame received for 5 I0508 10:57:27.880117 6 log.go:172] (0xc001a9c460) (5) Data frame handling I0508 10:57:27.880143 6 log.go:172] (0xc001b14420) Data frame received for 3 I0508 10:57:27.880169 6 log.go:172] (0xc001c800a0) (3) Data frame handling I0508 10:57:27.880190 6 log.go:172] (0xc001c800a0) (3) Data frame sent I0508 10:57:27.880202 6 log.go:172] (0xc001b14420) Data frame received for 3 I0508 10:57:27.880214 6 log.go:172] (0xc001c800a0) (3) Data frame handling I0508 10:57:27.881907 6 log.go:172] (0xc001b14420) Data frame received for 1 I0508 10:57:27.881932 6 log.go:172] 
(0xc001a9c3c0) (1) Data frame handling I0508 10:57:27.881947 6 log.go:172] (0xc001a9c3c0) (1) Data frame sent I0508 10:57:27.881966 6 log.go:172] (0xc001b14420) (0xc001a9c3c0) Stream removed, broadcasting: 1 I0508 10:57:27.881992 6 log.go:172] (0xc001b14420) Go away received I0508 10:57:27.882217 6 log.go:172] (0xc001b14420) (0xc001a9c3c0) Stream removed, broadcasting: 1 I0508 10:57:27.882264 6 log.go:172] (0xc001b14420) (0xc001c800a0) Stream removed, broadcasting: 3 I0508 10:57:27.882286 6 log.go:172] (0xc001b14420) (0xc001a9c460) Stream removed, broadcasting: 5 May 8 10:57:27.882: INFO: Exec stderr: "" May 8 10:57:27.882: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:27.882: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:27.918365 6 log.go:172] (0xc000c92420) (0xc001c80320) Create stream I0508 10:57:27.918418 6 log.go:172] (0xc000c92420) (0xc001c80320) Stream added, broadcasting: 1 I0508 10:57:27.921006 6 log.go:172] (0xc000c92420) Reply frame received for 1 I0508 10:57:27.921043 6 log.go:172] (0xc000c92420) (0xc001dae320) Create stream I0508 10:57:27.921054 6 log.go:172] (0xc000c92420) (0xc001dae320) Stream added, broadcasting: 3 I0508 10:57:27.922462 6 log.go:172] (0xc000c92420) Reply frame received for 3 I0508 10:57:27.922523 6 log.go:172] (0xc000c92420) (0xc00180e000) Create stream I0508 10:57:27.922554 6 log.go:172] (0xc000c92420) (0xc00180e000) Stream added, broadcasting: 5 I0508 10:57:27.923571 6 log.go:172] (0xc000c92420) Reply frame received for 5 I0508 10:57:27.985078 6 log.go:172] (0xc000c92420) Data frame received for 3 I0508 10:57:27.985280 6 log.go:172] (0xc001dae320) (3) Data frame handling I0508 10:57:27.985306 6 log.go:172] (0xc001dae320) (3) Data frame sent I0508 10:57:27.985327 6 log.go:172] (0xc000c92420) Data frame received for 3 I0508 10:57:27.985340 6 
log.go:172] (0xc001dae320) (3) Data frame handling I0508 10:57:27.985392 6 log.go:172] (0xc000c92420) Data frame received for 5 I0508 10:57:27.985420 6 log.go:172] (0xc00180e000) (5) Data frame handling I0508 10:57:27.986766 6 log.go:172] (0xc000c92420) Data frame received for 1 I0508 10:57:27.986783 6 log.go:172] (0xc001c80320) (1) Data frame handling I0508 10:57:27.986804 6 log.go:172] (0xc001c80320) (1) Data frame sent I0508 10:57:27.986829 6 log.go:172] (0xc000c92420) (0xc001c80320) Stream removed, broadcasting: 1 I0508 10:57:27.986935 6 log.go:172] (0xc000c92420) (0xc001c80320) Stream removed, broadcasting: 1 I0508 10:57:27.986981 6 log.go:172] (0xc000c92420) Go away received I0508 10:57:27.987045 6 log.go:172] (0xc000c92420) (0xc001dae320) Stream removed, broadcasting: 3 I0508 10:57:27.987093 6 log.go:172] (0xc000c92420) (0xc00180e000) Stream removed, broadcasting: 5 May 8 10:57:27.987: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 8 10:57:27.987: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:27.987: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.020942 6 log.go:172] (0xc000c342c0) (0xc00180e1e0) Create stream I0508 10:57:28.020966 6 log.go:172] (0xc000c342c0) (0xc00180e1e0) Stream added, broadcasting: 1 I0508 10:57:28.023197 6 log.go:172] (0xc000c342c0) Reply frame received for 1 I0508 10:57:28.023243 6 log.go:172] (0xc000c342c0) (0xc001c803c0) Create stream I0508 10:57:28.023259 6 log.go:172] (0xc000c342c0) (0xc001c803c0) Stream added, broadcasting: 3 I0508 10:57:28.024538 6 log.go:172] (0xc000c342c0) Reply frame received for 3 I0508 10:57:28.024582 6 log.go:172] (0xc000c342c0) (0xc001c80460) Create stream I0508 10:57:28.024590 6 log.go:172] (0xc000c342c0) (0xc001c80460) Stream added, 
broadcasting: 5 I0508 10:57:28.025808 6 log.go:172] (0xc000c342c0) Reply frame received for 5 I0508 10:57:28.084852 6 log.go:172] (0xc000c342c0) Data frame received for 3 I0508 10:57:28.084899 6 log.go:172] (0xc001c803c0) (3) Data frame handling I0508 10:57:28.084911 6 log.go:172] (0xc001c803c0) (3) Data frame sent I0508 10:57:28.084917 6 log.go:172] (0xc000c342c0) Data frame received for 3 I0508 10:57:28.084923 6 log.go:172] (0xc001c803c0) (3) Data frame handling I0508 10:57:28.084952 6 log.go:172] (0xc000c342c0) Data frame received for 5 I0508 10:57:28.084968 6 log.go:172] (0xc001c80460) (5) Data frame handling I0508 10:57:28.086605 6 log.go:172] (0xc000c342c0) Data frame received for 1 I0508 10:57:28.086646 6 log.go:172] (0xc00180e1e0) (1) Data frame handling I0508 10:57:28.086673 6 log.go:172] (0xc00180e1e0) (1) Data frame sent I0508 10:57:28.086692 6 log.go:172] (0xc000c342c0) (0xc00180e1e0) Stream removed, broadcasting: 1 I0508 10:57:28.086822 6 log.go:172] (0xc000c342c0) (0xc00180e1e0) Stream removed, broadcasting: 1 I0508 10:57:28.086846 6 log.go:172] (0xc000c342c0) (0xc001c803c0) Stream removed, broadcasting: 3 I0508 10:57:28.086861 6 log.go:172] (0xc000c342c0) (0xc001c80460) Stream removed, broadcasting: 5 May 8 10:57:28.086: INFO: Exec stderr: "" I0508 10:57:28.086891 6 log.go:172] (0xc000c342c0) Go away received May 8 10:57:28.086: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:28.086: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.122760 6 log.go:172] (0xc000afd970) (0xc001a108c0) Create stream I0508 10:57:28.122783 6 log.go:172] (0xc000afd970) (0xc001a108c0) Stream added, broadcasting: 1 I0508 10:57:28.125793 6 log.go:172] (0xc000afd970) Reply frame received for 1 I0508 10:57:28.125843 6 log.go:172] (0xc000afd970) (0xc001a10960) Create stream I0508 10:57:28.125857 
6 log.go:172] (0xc000afd970) (0xc001a10960) Stream added, broadcasting: 3 I0508 10:57:28.126873 6 log.go:172] (0xc000afd970) Reply frame received for 3 I0508 10:57:28.126902 6 log.go:172] (0xc000afd970) (0xc001a10a00) Create stream I0508 10:57:28.126910 6 log.go:172] (0xc000afd970) (0xc001a10a00) Stream added, broadcasting: 5 I0508 10:57:28.127786 6 log.go:172] (0xc000afd970) Reply frame received for 5 I0508 10:57:28.178459 6 log.go:172] (0xc000afd970) Data frame received for 3 I0508 10:57:28.178501 6 log.go:172] (0xc001a10960) (3) Data frame handling I0508 10:57:28.178526 6 log.go:172] (0xc001a10960) (3) Data frame sent I0508 10:57:28.178550 6 log.go:172] (0xc000afd970) Data frame received for 3 I0508 10:57:28.178566 6 log.go:172] (0xc001a10960) (3) Data frame handling I0508 10:57:28.178596 6 log.go:172] (0xc000afd970) Data frame received for 5 I0508 10:57:28.178625 6 log.go:172] (0xc001a10a00) (5) Data frame handling I0508 10:57:28.180605 6 log.go:172] (0xc000afd970) Data frame received for 1 I0508 10:57:28.180634 6 log.go:172] (0xc001a108c0) (1) Data frame handling I0508 10:57:28.180651 6 log.go:172] (0xc001a108c0) (1) Data frame sent I0508 10:57:28.180673 6 log.go:172] (0xc000afd970) (0xc001a108c0) Stream removed, broadcasting: 1 I0508 10:57:28.180736 6 log.go:172] (0xc000afd970) Go away received I0508 10:57:28.180777 6 log.go:172] (0xc000afd970) (0xc001a108c0) Stream removed, broadcasting: 1 I0508 10:57:28.180794 6 log.go:172] (0xc000afd970) (0xc001a10960) Stream removed, broadcasting: 3 I0508 10:57:28.180806 6 log.go:172] (0xc000afd970) (0xc001a10a00) Stream removed, broadcasting: 5 May 8 10:57:28.180: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 8 10:57:28.180: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
May 8 10:57:28.180: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.211655 6 log.go:172] (0xc00120e2c0) (0xc001dae5a0) Create stream I0508 10:57:28.211683 6 log.go:172] (0xc00120e2c0) (0xc001dae5a0) Stream added, broadcasting: 1 I0508 10:57:28.214624 6 log.go:172] (0xc00120e2c0) Reply frame received for 1 I0508 10:57:28.214674 6 log.go:172] (0xc00120e2c0) (0xc001c80500) Create stream I0508 10:57:28.214688 6 log.go:172] (0xc00120e2c0) (0xc001c80500) Stream added, broadcasting: 3 I0508 10:57:28.215626 6 log.go:172] (0xc00120e2c0) Reply frame received for 3 I0508 10:57:28.215672 6 log.go:172] (0xc00120e2c0) (0xc00180e280) Create stream I0508 10:57:28.215689 6 log.go:172] (0xc00120e2c0) (0xc00180e280) Stream added, broadcasting: 5 I0508 10:57:28.216527 6 log.go:172] (0xc00120e2c0) Reply frame received for 5 I0508 10:57:28.280500 6 log.go:172] (0xc00120e2c0) Data frame received for 5 I0508 10:57:28.280533 6 log.go:172] (0xc00180e280) (5) Data frame handling I0508 10:57:28.280559 6 log.go:172] (0xc00120e2c0) Data frame received for 3 I0508 10:57:28.280568 6 log.go:172] (0xc001c80500) (3) Data frame handling I0508 10:57:28.280578 6 log.go:172] (0xc001c80500) (3) Data frame sent I0508 10:57:28.280584 6 log.go:172] (0xc00120e2c0) Data frame received for 3 I0508 10:57:28.280590 6 log.go:172] (0xc001c80500) (3) Data frame handling I0508 10:57:28.281868 6 log.go:172] (0xc00120e2c0) Data frame received for 1 I0508 10:57:28.281899 6 log.go:172] (0xc001dae5a0) (1) Data frame handling I0508 10:57:28.281917 6 log.go:172] (0xc001dae5a0) (1) Data frame sent I0508 10:57:28.281944 6 log.go:172] (0xc00120e2c0) (0xc001dae5a0) Stream removed, broadcasting: 1 I0508 10:57:28.281958 6 log.go:172] (0xc00120e2c0) Go away received I0508 10:57:28.282089 6 log.go:172] (0xc00120e2c0) (0xc001dae5a0) Stream removed, broadcasting: 1 I0508 10:57:28.282103 6 log.go:172] (0xc00120e2c0) (0xc001c80500) Stream removed, broadcasting: 3 I0508 10:57:28.282110 6 log.go:172] (0xc00120e2c0) 
(0xc00180e280) Stream removed, broadcasting: 5 May 8 10:57:28.282: INFO: Exec stderr: "" May 8 10:57:28.282: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:28.282: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.309979 6 log.go:172] (0xc000afde40) (0xc001a10c80) Create stream I0508 10:57:28.310009 6 log.go:172] (0xc000afde40) (0xc001a10c80) Stream added, broadcasting: 1 I0508 10:57:28.312862 6 log.go:172] (0xc000afde40) Reply frame received for 1 I0508 10:57:28.312892 6 log.go:172] (0xc000afde40) (0xc001dae640) Create stream I0508 10:57:28.312907 6 log.go:172] (0xc000afde40) (0xc001dae640) Stream added, broadcasting: 3 I0508 10:57:28.314087 6 log.go:172] (0xc000afde40) Reply frame received for 3 I0508 10:57:28.314124 6 log.go:172] (0xc000afde40) (0xc00180e320) Create stream I0508 10:57:28.314146 6 log.go:172] (0xc000afde40) (0xc00180e320) Stream added, broadcasting: 5 I0508 10:57:28.315147 6 log.go:172] (0xc000afde40) Reply frame received for 5 I0508 10:57:28.373465 6 log.go:172] (0xc000afde40) Data frame received for 3 I0508 10:57:28.373493 6 log.go:172] (0xc001dae640) (3) Data frame handling I0508 10:57:28.373501 6 log.go:172] (0xc001dae640) (3) Data frame sent I0508 10:57:28.373505 6 log.go:172] (0xc000afde40) Data frame received for 3 I0508 10:57:28.373509 6 log.go:172] (0xc001dae640) (3) Data frame handling I0508 10:57:28.373528 6 log.go:172] (0xc000afde40) Data frame received for 5 I0508 10:57:28.373534 6 log.go:172] (0xc00180e320) (5) Data frame handling I0508 10:57:28.374710 6 log.go:172] (0xc000afde40) Data frame received for 1 I0508 10:57:28.374733 6 log.go:172] (0xc001a10c80) (1) Data frame handling I0508 10:57:28.374748 6 log.go:172] (0xc001a10c80) (1) Data frame sent I0508 10:57:28.374765 6 log.go:172] (0xc000afde40) (0xc001a10c80) Stream removed, 
broadcasting: 1 I0508 10:57:28.374801 6 log.go:172] (0xc000afde40) Go away received I0508 10:57:28.374890 6 log.go:172] (0xc000afde40) (0xc001a10c80) Stream removed, broadcasting: 1 I0508 10:57:28.374956 6 log.go:172] (0xc000afde40) (0xc001dae640) Stream removed, broadcasting: 3 I0508 10:57:28.374981 6 log.go:172] (0xc000afde40) (0xc00180e320) Stream removed, broadcasting: 5 May 8 10:57:28.374: INFO: Exec stderr: "" May 8 10:57:28.375: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:28.375: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.408612 6 log.go:172] (0xc00120e790) (0xc001daea00) Create stream I0508 10:57:28.408654 6 log.go:172] (0xc00120e790) (0xc001daea00) Stream added, broadcasting: 1 I0508 10:57:28.411538 6 log.go:172] (0xc00120e790) Reply frame received for 1 I0508 10:57:28.411577 6 log.go:172] (0xc00120e790) (0xc001daeaa0) Create stream I0508 10:57:28.411591 6 log.go:172] (0xc00120e790) (0xc001daeaa0) Stream added, broadcasting: 3 I0508 10:57:28.412502 6 log.go:172] (0xc00120e790) Reply frame received for 3 I0508 10:57:28.412562 6 log.go:172] (0xc00120e790) (0xc001a10d20) Create stream I0508 10:57:28.412579 6 log.go:172] (0xc00120e790) (0xc001a10d20) Stream added, broadcasting: 5 I0508 10:57:28.413740 6 log.go:172] (0xc00120e790) Reply frame received for 5 I0508 10:57:28.482343 6 log.go:172] (0xc00120e790) Data frame received for 5 I0508 10:57:28.482386 6 log.go:172] (0xc001a10d20) (5) Data frame handling I0508 10:57:28.482420 6 log.go:172] (0xc00120e790) Data frame received for 3 I0508 10:57:28.482437 6 log.go:172] (0xc001daeaa0) (3) Data frame handling I0508 10:57:28.482457 6 log.go:172] (0xc001daeaa0) (3) Data frame sent I0508 10:57:28.482469 6 log.go:172] (0xc00120e790) Data frame received for 3 I0508 10:57:28.482477 6 log.go:172] (0xc001daeaa0) (3) Data 
frame handling I0508 10:57:28.483565 6 log.go:172] (0xc00120e790) Data frame received for 1 I0508 10:57:28.483579 6 log.go:172] (0xc001daea00) (1) Data frame handling I0508 10:57:28.483599 6 log.go:172] (0xc001daea00) (1) Data frame sent I0508 10:57:28.483626 6 log.go:172] (0xc00120e790) (0xc001daea00) Stream removed, broadcasting: 1 I0508 10:57:28.483714 6 log.go:172] (0xc00120e790) Go away received I0508 10:57:28.483756 6 log.go:172] (0xc00120e790) (0xc001daea00) Stream removed, broadcasting: 1 I0508 10:57:28.483782 6 log.go:172] (0xc00120e790) (0xc001daeaa0) Stream removed, broadcasting: 3 I0508 10:57:28.483800 6 log.go:172] (0xc00120e790) (0xc001a10d20) Stream removed, broadcasting: 5 May 8 10:57:28.483: INFO: Exec stderr: "" May 8 10:57:28.483: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-98xrh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:57:28.483: INFO: >>> kubeConfig: /root/.kube/config I0508 10:57:28.513422 6 log.go:172] (0xc00120ec60) (0xc001daec80) Create stream I0508 10:57:28.513463 6 log.go:172] (0xc00120ec60) (0xc001daec80) Stream added, broadcasting: 1 I0508 10:57:28.516284 6 log.go:172] (0xc00120ec60) Reply frame received for 1 I0508 10:57:28.516330 6 log.go:172] (0xc00120ec60) (0xc001daed20) Create stream I0508 10:57:28.516344 6 log.go:172] (0xc00120ec60) (0xc001daed20) Stream added, broadcasting: 3 I0508 10:57:28.517451 6 log.go:172] (0xc00120ec60) Reply frame received for 3 I0508 10:57:28.517504 6 log.go:172] (0xc00120ec60) (0xc00180e3c0) Create stream I0508 10:57:28.517516 6 log.go:172] (0xc00120ec60) (0xc00180e3c0) Stream added, broadcasting: 5 I0508 10:57:28.518367 6 log.go:172] (0xc00120ec60) Reply frame received for 5 I0508 10:57:28.579894 6 log.go:172] (0xc00120ec60) Data frame received for 5 I0508 10:57:28.579957 6 log.go:172] (0xc00180e3c0) (5) Data frame handling I0508 10:57:28.579995 6 
log.go:172] (0xc00120ec60) Data frame received for 3 I0508 10:57:28.580013 6 log.go:172] (0xc001daed20) (3) Data frame handling I0508 10:57:28.580040 6 log.go:172] (0xc001daed20) (3) Data frame sent I0508 10:57:28.580054 6 log.go:172] (0xc00120ec60) Data frame received for 3 I0508 10:57:28.580065 6 log.go:172] (0xc001daed20) (3) Data frame handling I0508 10:57:28.581508 6 log.go:172] (0xc00120ec60) Data frame received for 1 I0508 10:57:28.581535 6 log.go:172] (0xc001daec80) (1) Data frame handling I0508 10:57:28.581552 6 log.go:172] (0xc001daec80) (1) Data frame sent I0508 10:57:28.581577 6 log.go:172] (0xc00120ec60) (0xc001daec80) Stream removed, broadcasting: 1 I0508 10:57:28.581599 6 log.go:172] (0xc00120ec60) Go away received I0508 10:57:28.581712 6 log.go:172] (0xc00120ec60) (0xc001daec80) Stream removed, broadcasting: 1 I0508 10:57:28.581735 6 log.go:172] (0xc00120ec60) (0xc001daed20) Stream removed, broadcasting: 3 I0508 10:57:28.581743 6 log.go:172] (0xc00120ec60) (0xc00180e3c0) Stream removed, broadcasting: 5 May 8 10:57:28.581: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:57:28.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-98xrh" for this suite. 
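The exec calls above compare `/etc/hosts` against `/etc/hosts-original` in each container to decide whether the kubelet manages the file. A minimal local sketch of that check, assuming the marker header below (taken from kubelet's `managedHostsHeader` in `kubelet_pods.go`, not from this log) and using throwaway files in `/tmp` as stand-ins for the real container filesystems:

```shell
# A kubelet-managed /etc/hosts begins with a fixed marker header; checking the
# first line for it is enough to tell a managed file from an unmanaged one.
is_kubelet_managed() {
  # Header string is assumed from kubelet source, not observed in this log.
  head -n 1 "$1" | grep -qF '# Kubernetes-managed hosts file.'
}

# Local stand-in data, not a real pod's files.
printf '# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n' > /tmp/hosts-managed
printf '127.0.0.1\tlocalhost\n' > /tmp/hosts-original

is_kubelet_managed /tmp/hosts-managed  && echo managed
is_kubelet_managed /tmp/hosts-original || echo not-managed
```

This mirrors why the test execs `cat /etc/hosts` and `cat /etc/hosts-original` separately: a container that mounts its own `/etc/hosts` (busybox-3 above) or runs with `hostNetwork=true` should not carry the kubelet's header.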
May 8 10:58:08.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 10:58:08.803: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-98xrh, resource: bindings, ignored listing per whitelist May 8 10:58:08.841: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-98xrh deletion completed in 40.255934648s • [SLOW TEST:51.406 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 10:58:08.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 10:58:09.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment 
--image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tlpll' May 8 10:58:11.881: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 10:58:11.881: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 8 10:58:15.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tlpll' May 8 10:58:16.190: INFO: stderr: "" May 8 10:58:16.190: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 10:58:16.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tlpll" for this suite. 
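The run above trips the deprecation warning for `--generator=deployment/v1beta1`; the modern equivalent is `kubectl create deployment`. A sketch that only builds the two command lines as strings (nothing here talks to a cluster; name, image, and namespace are copied from the log):

```shell
name="e2e-test-nginx-deployment"
image="docker.io/library/nginx:1.14-alpine"
ns="e2e-tests-kubectl-tlpll"

# Deprecated form used by this v1.13 test run:
old="kubectl run $name --image=$image --generator=deployment/v1beta1 --namespace=$ns"

# Replacement suggested by the warning in the log:
new="kubectl create deployment $name --image=$image --namespace=$ns"

echo "$new"
```

`kubectl run` later became pod-only (`--generator=run-pod/v1`), which is why the warning steers workload creation to `kubectl create`.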
May 8 11:00:18.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:00:18.476: INFO: namespace: e2e-tests-kubectl-tlpll, resource: bindings, ignored listing per whitelist May 8 11:00:18.514: INFO: namespace e2e-tests-kubectl-tlpll deletion completed in 2m2.319446009s • [SLOW TEST:129.672 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:00:18.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-r7rmr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r7rmr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.61.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.61.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.61.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.61.105_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-r7rmr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r7rmr.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-r7rmr.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r7rmr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.61.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.61.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.61.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.61.105_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:00:26.750: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.753: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.756: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.769: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.795: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server 
could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.799: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.802: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.806: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.809: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.812: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.816: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.819: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods 
dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:26.836: INFO: Lookups using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:31.841: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.844: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.847: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.856: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.887: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods 
dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.889: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.891: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.894: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.896: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.899: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.901: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:31.915: INFO: Lookups 
using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:36.841: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.844: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.847: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.860: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.884: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.887: INFO: Unable to read 
jessie_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.890: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.893: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.896: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.899: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.902: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.905: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:36.935: INFO: Lookups using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: 
[wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:41.841: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.845: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.849: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.860: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.885: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.888: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.891: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.894: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.898: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.901: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:41.953: INFO: Lookups using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:46.841: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.845: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.849: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.863: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.887: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.890: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not 
find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.892: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.895: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.898: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.901: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.904: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.907: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:46.927: INFO: Lookups using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:51.841: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.844: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.847: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.877: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.880: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods 
dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.882: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.885: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.887: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.890: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.892: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.894: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc from pod e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017: the server could not find the requested resource (get pods dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017) May 8 11:00:51.910: INFO: Lookups using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-r7rmr wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r7rmr jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr jessie_udp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@dns-test-service.e2e-tests-dns-r7rmr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r7rmr.svc] May 8 11:00:56.943: INFO: DNS probes using e2e-tests-dns-r7rmr/dns-test-1abb4b8c-911b-11ea-8adb-0242ac110017 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:00:57.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-r7rmr" for this suite. May 8 11:01:03.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:01:03.106: INFO: namespace: e2e-tests-dns-r7rmr, resource: bindings, ignored listing per whitelist May 8 11:01:03.150: INFO: namespace e2e-tests-dns-r7rmr deletion completed in 6.098194173s • [SLOW TEST:44.636 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:01:03.151: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 8 11:01:03.232: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399125,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 11:01:03.232: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399125,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the 
notification May 8 11:01:13.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399145,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 11:01:13.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399145,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 8 11:01:23.247: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399165,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 11:01:23.247: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399165,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 8 11:01:33.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399185,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 11:01:33.254: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-a,UID:3548763c-911b-11ea-99e8-0242ac110002,ResourceVersion:9399185,Generation:0,CreationTimestamp:2020-05-08 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 8 11:01:43.260: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-b,UID:4d23ca02-911b-11ea-99e8-0242ac110002,ResourceVersion:9399206,Generation:0,CreationTimestamp:2020-05-08 11:01:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 11:01:43.260: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-b,UID:4d23ca02-911b-11ea-99e8-0242ac110002,ResourceVersion:9399206,Generation:0,CreationTimestamp:2020-05-08 11:01:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 8 11:01:53.268: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-b,UID:4d23ca02-911b-11ea-99e8-0242ac110002,ResourceVersion:9399226,Generation:0,CreationTimestamp:2020-05-08 11:01:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 11:01:53.268: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8kdrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kdrk/configmaps/e2e-watch-test-configmap-b,UID:4d23ca02-911b-11ea-99e8-0242ac110002,ResourceVersion:9399226,Generation:0,CreationTimestamp:2020-05-08 11:01:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:02:03.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-8kdrk" for this suite. 
May 8 11:02:09.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:02:09.358: INFO: namespace: e2e-tests-watch-8kdrk, resource: bindings, ignored listing per whitelist May 8 11:02:09.360: INFO: namespace e2e-tests-watch-8kdrk deletion completed in 6.086031718s • [SLOW TEST:66.209 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:02:09.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0508 11:02:19.505634 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 8 11:02:19.505: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:02:19.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-h95fr" for this suite. 
May 8 11:02:25.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:02:25.558: INFO: namespace: e2e-tests-gc-h95fr, resource: bindings, ignored listing per whitelist May 8 11:02:25.610: INFO: namespace e2e-tests-gc-h95fr deletion completed in 6.101284646s • [SLOW TEST:16.250 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:02:25.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:02:32.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-4w9dk" for this suite. 
May 8 11:02:54.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:02:54.814: INFO: namespace: e2e-tests-replication-controller-4w9dk, resource: bindings, ignored listing per whitelist May 8 11:02:54.858: INFO: namespace e2e-tests-replication-controller-4w9dk deletion completed in 22.08058943s • [SLOW TEST:29.248 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:02:54.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 8 11:02:54.997: INFO: Waiting up to 5m0s for pod "pod-77e4be08-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-smfhd" to be "success or failure" May 8 11:02:55.049: INFO: Pod "pod-77e4be08-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 52.709136ms May 8 11:02:57.054: INFO: Pod "pod-77e4be08-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056910454s May 8 11:02:59.058: INFO: Pod "pod-77e4be08-911b-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.061377987s May 8 11:03:01.062: INFO: Pod "pod-77e4be08-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065288683s STEP: Saw pod success May 8 11:03:01.062: INFO: Pod "pod-77e4be08-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:03:01.065: INFO: Trying to get logs from node hunter-worker pod pod-77e4be08-911b-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:03:01.085: INFO: Waiting for pod pod-77e4be08-911b-11ea-8adb-0242ac110017 to disappear May 8 11:03:01.090: INFO: Pod pod-77e4be08-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:03:01.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-smfhd" for this suite. 
May 8 11:03:07.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:03:07.141: INFO: namespace: e2e-tests-emptydir-smfhd, resource: bindings, ignored listing per whitelist May 8 11:03:07.182: INFO: namespace e2e-tests-emptydir-smfhd deletion completed in 6.088908557s • [SLOW TEST:12.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:03:07.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 8 11:03:07.326: INFO: Waiting up to 5m0s for pod "downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-9qf7z" to be "success or failure" May 8 11:03:07.345: INFO: Pod "downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.278754ms May 8 11:03:09.348: INFO: Pod "downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022583946s May 8 11:03:11.354: INFO: Pod "downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027740631s STEP: Saw pod success May 8 11:03:11.354: INFO: Pod "downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:03:11.357: INFO: Trying to get logs from node hunter-worker2 pod downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 11:03:11.408: INFO: Waiting for pod downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017 to disappear May 8 11:03:11.442: INFO: Pod downward-api-7f3efd4e-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:03:11.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9qf7z" for this suite. 
May 8 11:03:17.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:03:17.577: INFO: namespace: e2e-tests-downward-api-9qf7z, resource: bindings, ignored listing per whitelist May 8 11:03:17.602: INFO: namespace e2e-tests-downward-api-9qf7z deletion completed in 6.155583818s • [SLOW TEST:10.419 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:03:17.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 8 11:03:17.751: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:17.753: INFO: Number of nodes with available pods: 0 May 8 11:03:17.753: INFO: Node hunter-worker is running more than one daemon pod May 8 11:03:18.758: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:18.761: INFO: Number of nodes with available pods: 0 May 8 11:03:18.761: INFO: Node hunter-worker is running more than one daemon pod May 8 11:03:19.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:19.763: INFO: Number of nodes with available pods: 0 May 8 11:03:19.763: INFO: Node hunter-worker is running more than one daemon pod May 8 11:03:20.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:20.762: INFO: Number of nodes with available pods: 0 May 8 11:03:20.762: INFO: Node hunter-worker is running more than one daemon pod May 8 11:03:21.759: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:21.766: INFO: Number of nodes with available pods: 1 May 8 11:03:21.766: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:22.758: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:22.760: INFO: Number of nodes with available pods: 2 May 8 11:03:22.760: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 8 11:03:22.773: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:22.775: INFO: Number of nodes with available pods: 1 May 8 11:03:22.775: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:23.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:23.784: INFO: Number of nodes with available pods: 1 May 8 11:03:23.784: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:24.803: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:24.806: INFO: Number of nodes with available pods: 1 May 8 11:03:24.806: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:25.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:25.785: INFO: Number of nodes with available pods: 1 May 8 11:03:25.785: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:26.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:26.784: INFO: Number of nodes with available pods: 1 May 8 11:03:26.784: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:27.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node May 8 11:03:27.785: INFO: Number of nodes with available pods: 1 May 8 11:03:27.785: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:28.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:28.784: INFO: Number of nodes with available pods: 1 May 8 11:03:28.784: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:29.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:29.785: INFO: Number of nodes with available pods: 1 May 8 11:03:29.785: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:30.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:30.786: INFO: Number of nodes with available pods: 1 May 8 11:03:30.786: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:31.804: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:31.806: INFO: Number of nodes with available pods: 1 May 8 11:03:31.806: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:32.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:32.784: INFO: Number of nodes with available pods: 1 May 8 11:03:32.784: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:33.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:33.784: INFO: Number of nodes with available pods: 1 May 8 11:03:33.784: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:34.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:34.785: INFO: Number of nodes with available pods: 1 May 8 11:03:34.785: INFO: Node hunter-worker2 is running more than one daemon pod May 8 11:03:35.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:03:35.785: INFO: Number of nodes with available pods: 2 May 8 11:03:35.785: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ptgxs, will wait for the garbage collector to delete the pods May 8 11:03:35.848: INFO: Deleting DaemonSet.extensions daemon-set took: 6.810752ms May 8 11:03:35.948: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.181404ms May 8 11:03:41.760: INFO: Number of nodes with available pods: 0 May 8 11:03:41.760: INFO: Number of running nodes: 0, number of available pods: 0 May 8 11:03:41.765: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ptgxs/daemonsets","resourceVersion":"9399624"},"items":null} May 8 11:03:41.768: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ptgxs/pods","resourceVersion":"9399624"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:03:41.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-ptgxs" for this suite. May 8 11:03:47.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:03:47.830: INFO: namespace: e2e-tests-daemonsets-ptgxs, resource: bindings, ignored listing per whitelist May 8 11:03:47.871: INFO: namespace e2e-tests-daemonsets-ptgxs deletion completed in 6.091057492s • [SLOW TEST:30.268 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:03:47.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:03:48.001: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.41137ms)
May 8 11:03:48.004: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.599117ms)
May 8 11:03:48.007: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.05317ms)
May 8 11:03:48.010: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.175623ms)
May 8 11:03:48.013: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.153198ms)
May 8 11:03:48.016: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.812448ms)
May 8 11:03:48.019: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.657ms)
May 8 11:03:48.022: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.802396ms)
May 8 11:03:48.025: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.083027ms)
May 8 11:03:48.028: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.105333ms)
May 8 11:03:48.032: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.556236ms)
May 8 11:03:48.035: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.961412ms)
May 8 11:03:48.038: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.234259ms)
May 8 11:03:48.041: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.237562ms)
May 8 11:03:48.050: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 8.893156ms)
May 8 11:03:48.053: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.143985ms)
May 8 11:03:48.056: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.296552ms)
May 8 11:03:48.058: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.195421ms)
May 8 11:03:48.060: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.002419ms)
May 8 11:03:48.062: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 2.324265ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:03:48.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-29pmz" for this suite. May 8 11:03:54.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:03:54.178: INFO: namespace: e2e-tests-proxy-29pmz, resource: bindings, ignored listing per whitelist May 8 11:03:54.196: INFO: namespace e2e-tests-proxy-29pmz deletion completed in 6.131710226s • [SLOW TEST:6.325 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:03:54.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 8 11:03:54.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 8 11:03:54.382: INFO: stderr: "" May 8 11:03:54.382: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:03:54.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rn7dd" for this suite. May 8 11:04:00.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:04:00.445: INFO: namespace: e2e-tests-kubectl-rn7dd, resource: bindings, ignored listing per whitelist May 8 11:04:00.474: INFO: namespace e2e-tests-kubectl-rn7dd deletion completed in 6.08759165s • [SLOW TEST:6.278 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:04:00.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 8 11:04:00.642: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399707,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 11:04:00.642: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399708,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 11:04:00.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399709,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 8 11:04:10.691: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399730,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 11:04:10.691: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399731,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 8 11:04:10.692: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m6k25,SelfLink:/api/v1/namespaces/e2e-tests-watch-m6k25/configmaps/e2e-watch-test-label-changed,UID:9f00fe8f-911b-11ea-99e8-0242ac110002,ResourceVersion:9399732,Generation:0,CreationTimestamp:2020-05-08 11:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:04:10.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-m6k25" for this suite. 
May 8 11:04:16.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:04:16.794: INFO: namespace: e2e-tests-watch-m6k25, resource: bindings, ignored listing per whitelist May 8 11:04:16.801: INFO: namespace e2e-tests-watch-m6k25 deletion completed in 6.104463046s • [SLOW TEST:16.326 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:04:16.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 8 11:04:17.426: INFO: Waiting up to 5m0s for pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f" in namespace "e2e-tests-svcaccounts-bgxlw" to be "success or failure" May 8 11:04:17.467: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.29386ms May 8 11:04:19.472: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045672954s May 8 11:04:21.623: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197169334s May 8 11:04:23.626: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.200344391s STEP: Saw pod success May 8 11:04:23.626: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f" satisfied condition "success or failure" May 8 11:04:23.629: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f container token-test: STEP: delete the pod May 8 11:04:23.926: INFO: Waiting for pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f to disappear May 8 11:04:23.937: INFO: Pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-22h5f no longer exists STEP: Creating a pod to test consume service account root CA May 8 11:04:23.941: INFO: Waiting up to 5m0s for pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k" in namespace "e2e-tests-svcaccounts-bgxlw" to be "success or failure" May 8 11:04:23.943: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k": Phase="Pending", Reason="", readiness=false. Elapsed: 1.982898ms May 8 11:04:25.947: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0062389s May 8 11:04:27.950: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009873237s May 8 11:04:29.955: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014267363s STEP: Saw pod success May 8 11:04:29.955: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k" satisfied condition "success or failure" May 8 11:04:29.958: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k container root-ca-test: STEP: delete the pod May 8 11:04:30.036: INFO: Waiting for pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k to disappear May 8 11:04:30.066: INFO: Pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-vgk7k no longer exists STEP: Creating a pod to test consume service account namespace May 8 11:04:30.070: INFO: Waiting up to 5m0s for pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx" in namespace "e2e-tests-svcaccounts-bgxlw" to be "success or failure" May 8 11:04:30.105: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx": Phase="Pending", Reason="", readiness=false. Elapsed: 34.972948ms May 8 11:04:32.109: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038887749s May 8 11:04:34.113: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043391119s May 8 11:04:36.117: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx": Phase="Running", Reason="", readiness=false. Elapsed: 6.047182582s May 8 11:04:38.122: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.051957422s STEP: Saw pod success May 8 11:04:38.122: INFO: Pod "pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx" satisfied condition "success or failure" May 8 11:04:38.126: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx container namespace-test: STEP: delete the pod May 8 11:04:38.183: INFO: Waiting for pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx to disappear May 8 11:04:38.188: INFO: Pod pod-service-account-a9077fa2-911b-11ea-8adb-0242ac110017-59czx no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:04:38.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-bgxlw" for this suite. May 8 11:04:44.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:04:44.266: INFO: namespace: e2e-tests-svcaccounts-bgxlw, resource: bindings, ignored listing per whitelist May 8 11:04:44.288: INFO: namespace e2e-tests-svcaccounts-bgxlw deletion completed in 6.097237816s • [SLOW TEST:27.487 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 
11:04:44.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 8 11:04:44.470: INFO: Waiting up to 5m0s for pod "client-containers-b9261b11-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-containers-4t4f5" to be "success or failure" May 8 11:04:44.476: INFO: Pod "client-containers-b9261b11-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115801ms May 8 11:04:46.535: INFO: Pod "client-containers-b9261b11-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064148953s May 8 11:04:48.538: INFO: Pod "client-containers-b9261b11-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068114882s STEP: Saw pod success May 8 11:04:48.539: INFO: Pod "client-containers-b9261b11-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:04:48.541: INFO: Trying to get logs from node hunter-worker pod client-containers-b9261b11-911b-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:04:48.568: INFO: Waiting for pod client-containers-b9261b11-911b-11ea-8adb-0242ac110017 to disappear May 8 11:04:48.635: INFO: Pod client-containers-b9261b11-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:04:48.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4t4f5" for this suite. 
May 8 11:04:54.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:04:54.731: INFO: namespace: e2e-tests-containers-4t4f5, resource: bindings, ignored listing per whitelist May 8 11:04:54.849: INFO: namespace e2e-tests-containers-4t4f5 deletion completed in 6.209189895s • [SLOW TEST:10.561 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:04:54.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-bf6ed723-911b-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:04:55.070: INFO: Waiting up to 5m0s for pod "pod-secrets-bf761996-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-dkrc5" to be "success or failure" May 8 11:04:55.074: INFO: Pod "pod-secrets-bf761996-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.116299ms May 8 11:04:57.077: INFO: Pod "pod-secrets-bf761996-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007302368s May 8 11:04:59.080: INFO: Pod "pod-secrets-bf761996-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010615875s STEP: Saw pod success May 8 11:04:59.081: INFO: Pod "pod-secrets-bf761996-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:04:59.083: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bf761996-911b-11ea-8adb-0242ac110017 container secret-env-test: STEP: delete the pod May 8 11:04:59.113: INFO: Waiting for pod pod-secrets-bf761996-911b-11ea-8adb-0242ac110017 to disappear May 8 11:04:59.128: INFO: Pod pod-secrets-bf761996-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:04:59.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dkrc5" for this suite. 
May 8 11:05:05.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:05:05.160: INFO: namespace: e2e-tests-secrets-dkrc5, resource: bindings, ignored listing per whitelist May 8 11:05:05.212: INFO: namespace e2e-tests-secrets-dkrc5 deletion completed in 6.08062024s • [SLOW TEST:10.362 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:05:05.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-c591f509-911b-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:05:05.315: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-qmvw4" to be "success or failure" May 8 11:05:05.360: INFO: Pod "pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.181859ms May 8 11:05:07.365: INFO: Pod "pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049501552s May 8 11:05:09.369: INFO: Pod "pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053918615s STEP: Saw pod success May 8 11:05:09.369: INFO: Pod "pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:05:09.377: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 8 11:05:09.409: INFO: Waiting for pod pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017 to disappear May 8 11:05:09.417: INFO: Pod pod-projected-secrets-c592883d-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:05:09.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qmvw4" for this suite. 
May 8 11:05:15.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:05:15.539: INFO: namespace: e2e-tests-projected-qmvw4, resource: bindings, ignored listing per whitelist May 8 11:05:15.563: INFO: namespace e2e-tests-projected-qmvw4 deletion completed in 6.143223241s • [SLOW TEST:10.352 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:05:15.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:05:15.825: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cbcca99e-911b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001d53af2), BlockOwnerDeletion:(*bool)(0xc001d53af3)}} May 8 11:05:15.855: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cbcb6652-911b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0016264ea), 
BlockOwnerDeletion:(*bool)(0xc0016264eb)}} May 8 11:05:15.886: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"cbcbef36-911b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0016db4a2), BlockOwnerDeletion:(*bool)(0xc0016db4a3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:05:20.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kvsmm" for this suite. May 8 11:05:26.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:05:27.009: INFO: namespace: e2e-tests-gc-kvsmm, resource: bindings, ignored listing per whitelist May 8 11:05:27.080: INFO: namespace e2e-tests-gc-kvsmm deletion completed in 6.115794108s • [SLOW TEST:11.516 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:05:27.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-9k2z7/configmap-test-d2997ef1-911b-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:05:27.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-9k2z7" to be "success or failure" May 8 11:05:27.247: INFO: Pod "pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 58.112898ms May 8 11:05:29.251: INFO: Pod "pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062150049s May 8 11:05:31.255: INFO: Pod "pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066104665s STEP: Saw pod success May 8 11:05:31.255: INFO: Pod "pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:05:31.258: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017 container env-test: STEP: delete the pod May 8 11:05:31.305: INFO: Waiting for pod pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017 to disappear May 8 11:05:31.310: INFO: Pod pod-configmaps-d29b858a-911b-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:05:31.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9k2z7" for this suite. 
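The garbage-collector test earlier builds a deliberate dependency circle, with ownerReferences pod1 -> pod3 -> pod2 -> pod1, and verifies collection is not blocked by it. Detecting such a circle is a plain graph walk over child-to-owner links; a sketch with UIDs abbreviated to pod names for readability (this is illustrative, not the GC's actual graph code):

```go
package main

import "fmt"

// hasCycle follows owner links (child -> owner) from start and reports
// whether the walk ever revisits a node, i.e. the ownerReferences form a
// circle like the pod1 -> pod3 -> pod2 -> pod1 one the test constructs.
func hasCycle(owners map[string]string, start string) bool {
	seen := map[string]bool{}
	for cur := start; ; {
		if seen[cur] {
			return true
		}
		seen[cur] = true
		next, ok := owners[cur]
		if !ok {
			return false // reached an object with no owner: no circle
		}
		cur = next
	}
}

func main() {
	owners := map[string]string{
		"pod1": "pod3",
		"pod2": "pod1",
		"pod3": "pod2",
	}
	fmt.Println(hasCycle(owners, "pod1")) // true
}
```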
May 8 11:05:37.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:05:37.395: INFO: namespace: e2e-tests-configmap-9k2z7, resource: bindings, ignored listing per whitelist May 8 11:05:37.454: INFO: namespace e2e-tests-configmap-9k2z7 deletion completed in 6.141215506s • [SLOW TEST:10.374 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:05:37.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 8 11:05:37.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:38.223: INFO: stderr: "" May 8 11:05:38.223: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 11:05:38.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:38.372: INFO: stderr: "" May 8 11:05:38.372: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 8 11:05:43.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:43.483: INFO: stderr: "" May 8 11:05:43.484: INFO: stdout: "update-demo-nautilus-4dgps update-demo-nautilus-mnmfq " May 8 11:05:43.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dgps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:43.583: INFO: stderr: "" May 8 11:05:43.583: INFO: stdout: "" May 8 11:05:43.583: INFO: update-demo-nautilus-4dgps is created but not running May 8 11:05:48.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:48.693: INFO: stderr: "" May 8 11:05:48.693: INFO: stdout: "update-demo-nautilus-4dgps update-demo-nautilus-mnmfq " May 8 11:05:48.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dgps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:48.799: INFO: stderr: "" May 8 11:05:48.799: INFO: stdout: "true" May 8 11:05:48.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dgps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:48.898: INFO: stderr: "" May 8 11:05:48.898: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:05:48.898: INFO: validating pod update-demo-nautilus-4dgps May 8 11:05:48.902: INFO: got data: { "image": "nautilus.jpg" } May 8 11:05:48.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 11:05:48.902: INFO: update-demo-nautilus-4dgps is verified up and running May 8 11:05:48.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnmfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:49.013: INFO: stderr: "" May 8 11:05:49.013: INFO: stdout: "true" May 8 11:05:49.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnmfq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:05:49.117: INFO: stderr: "" May 8 11:05:49.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:05:49.117: INFO: validating pod update-demo-nautilus-mnmfq May 8 11:05:49.121: INFO: got data: { "image": "nautilus.jpg" } May 8 11:05:49.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 8 11:05:49.121: INFO: update-demo-nautilus-mnmfq is verified up and running STEP: rolling-update to new replication controller May 8 11:05:49.123: INFO: scanned /root for discovery docs: May 8 11:05:49.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.011: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 8 11:06:13.011: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 11:06:13.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.123: INFO: stderr: "" May 8 11:06:13.123: INFO: stdout: "update-demo-kitten-q7gpd update-demo-kitten-r968l " May 8 11:06:13.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q7gpd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.234: INFO: stderr: "" May 8 11:06:13.234: INFO: stdout: "true" May 8 11:06:13.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q7gpd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.337: INFO: stderr: "" May 8 11:06:13.337: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 8 11:06:13.337: INFO: validating pod update-demo-kitten-q7gpd May 8 11:06:13.340: INFO: got data: { "image": "kitten.jpg" } May 8 11:06:13.340: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 8 11:06:13.340: INFO: update-demo-kitten-q7gpd is verified up and running May 8 11:06:13.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r968l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.434: INFO: stderr: "" May 8 11:06:13.434: INFO: stdout: "true" May 8 11:06:13.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r968l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hgz75' May 8 11:06:13.533: INFO: stderr: "" May 8 11:06:13.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 8 11:06:13.533: INFO: validating pod update-demo-kitten-r968l May 8 11:06:13.536: INFO: got data: { "image": "kitten.jpg" } May 8 11:06:13.537: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
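The `kubectl get pods -o template` invocations above use Go's text/template syntax. The simplest of them, `{{range.items}}{{.metadata.name}} {{end}}`, can be exercised against a mock pod list with nothing but the standard library (the `renderNames` helper is mine; kubectl additionally registers functions like `exists` that are not in the stdlib):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames applies the same name-listing template kubectl is given in the
// log above to an arbitrary pod-list-shaped value.
func renderNames(podList any) (string, error) {
	t, err := template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, podList); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Mock of the structure `kubectl get pods -o template` walks.
	podList := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"name": "update-demo-kitten-q7gpd"}},
			{"metadata": map[string]any{"name": "update-demo-kitten-r968l"}},
		},
	}
	out, err := renderNames(podList)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```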
May 8 11:06:13.537: INFO: update-demo-kitten-r968l is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:06:13.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hgz75" for this suite. May 8 11:06:37.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:06:37.605: INFO: namespace: e2e-tests-kubectl-hgz75, resource: bindings, ignored listing per whitelist May 8 11:06:37.633: INFO: namespace e2e-tests-kubectl-hgz75 deletion completed in 24.093464339s • [SLOW TEST:60.179 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:06:37.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 8 11:06:38.269: INFO: created pod 
pod-service-account-defaultsa May 8 11:06:38.269: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 8 11:06:38.276: INFO: created pod pod-service-account-mountsa May 8 11:06:38.276: INFO: pod pod-service-account-mountsa service account token volume mount: true May 8 11:06:38.306: INFO: created pod pod-service-account-nomountsa May 8 11:06:38.306: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 8 11:06:38.324: INFO: created pod pod-service-account-defaultsa-mountspec May 8 11:06:38.324: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 8 11:06:38.422: INFO: created pod pod-service-account-mountsa-mountspec May 8 11:06:38.422: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 8 11:06:38.432: INFO: created pod pod-service-account-nomountsa-mountspec May 8 11:06:38.432: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 8 11:06:38.468: INFO: created pod pod-service-account-defaultsa-nomountspec May 8 11:06:38.468: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 8 11:06:38.499: INFO: created pod pod-service-account-mountsa-nomountspec May 8 11:06:38.499: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 8 11:06:38.584: INFO: created pod pod-service-account-nomountsa-nomountspec May 8 11:06:38.584: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:06:38.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-2hqwx" for this suite. 
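The nine-pod matrix above exercises token-automount precedence: a pod-level `automountServiceAccountToken` wins over the ServiceAccount's setting, and an unset value means the token is mounted. A minimal sketch of that decision rule (the function name and the `None`-for-unset encoding are my own, not the framework's API):

```python
def token_volume_mounted(pod_automount=None, sa_automount=None):
    """Decide whether a service-account token volume is mounted.

    Mirrors the precedence exercised by the e2e matrix above:
    pod.spec.automountServiceAccountToken overrides the
    ServiceAccount's automountServiceAccountToken; unset means True.
    """
    if pod_automount is not None:   # pod spec is most specific, wins outright
        return pod_automount
    if sa_automount is not None:    # otherwise fall back to the ServiceAccount
        return sa_automount
    return True                     # default: mount the token

# The log's matrix, reproduced:
#   nomountsa + mountspec   -> mounted (pod spec wins over SA's False)
#   mountsa   + nomountspec -> not mounted (pod spec wins over SA's True)
```

This matches every `service account token volume mount: true/false` line printed above.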
May 8 11:07:08.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:07:09.000: INFO: namespace: e2e-tests-svcaccounts-2hqwx, resource: bindings, ignored listing per whitelist May 8 11:07:09.078: INFO: namespace e2e-tests-svcaccounts-2hqwx deletion completed in 30.483865449s • [SLOW TEST:31.445 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:07:09.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 8 11:07:09.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:09.466: INFO: stderr: "" May 8 11:07:09.466: INFO: stdout: "pod/pause created\n" May 8 11:07:09.466: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 8 11:07:09.466: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-w9smw" to be "running and ready" 
May 8 11:07:09.472: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.276355ms May 8 11:07:11.512: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045379114s May 8 11:07:13.516: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.049163854s May 8 11:07:13.516: INFO: Pod "pause" satisfied condition "running and ready" May 8 11:07:13.516: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 8 11:07:13.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:13.635: INFO: stderr: "" May 8 11:07:13.635: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 8 11:07:13.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:13.729: INFO: stderr: "" May 8 11:07:13.729: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 8 11:07:13.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:13.832: INFO: stderr: "" May 8 11:07:13.832: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 8 11:07:13.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:13.928: INFO: stderr: "" May 8 11:07:13.928: INFO: stdout: "NAME 
READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 8 11:07:13.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:14.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:07:14.027: INFO: stdout: "pod \"pause\" force deleted\n" May 8 11:07:14.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-w9smw' May 8 11:07:14.137: INFO: stderr: "No resources found.\n" May 8 11:07:14.137: INFO: stdout: "" May 8 11:07:14.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-w9smw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 11:07:14.290: INFO: stderr: "" May 8 11:07:14.290: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:07:14.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w9smw" for this suite. 
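The cleanup step's go-template prints the name of every matching pod that has no `metadata.deletionTimestamp`, i.e. pods not yet being torn down; the empty stdout above confirms none remain. A rough Python equivalent, assuming the pod list is already decoded from JSON (the dict shapes are illustrative):

```python
def pods_not_terminating(pod_list):
    """Names of pods without a deletionTimestamp, like the go-template
    {{ range .items }}{{ if not .metadata.deletionTimestamp }}
    {{ .metadata.name }}{{ end }}{{ end }} used in the cleanup above."""
    return [
        pod["metadata"]["name"]
        for pod in pod_list.get("items", [])
        if not pod["metadata"].get("deletionTimestamp")
    ]

# Illustrative input: one live pod, one already marked for deletion.
example = {"items": [
    {"metadata": {"name": "pause"}},
    {"metadata": {"name": "old", "deletionTimestamp": "2020-05-08T11:07:14Z"}},
]}
```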
May 8 11:07:20.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:07:20.502: INFO: namespace: e2e-tests-kubectl-w9smw, resource: bindings, ignored listing per whitelist May 8 11:07:20.572: INFO: namespace e2e-tests-kubectl-w9smw deletion completed in 6.278767203s • [SLOW TEST:11.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:07:20.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:07:20.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 8 11:07:20.827: INFO: stderr: "" May 8 11:07:20.827: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", 
GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:07:20.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kgzhc" for this suite. May 8 11:07:28.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:07:28.895: INFO: namespace: e2e-tests-kubectl-kgzhc, resource: bindings, ignored listing per whitelist May 8 11:07:28.921: INFO: namespace e2e-tests-kubectl-kgzhc deletion completed in 8.089586852s • [SLOW TEST:8.349 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:07:28.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service 
and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7rwvf in namespace e2e-tests-proxy-7rndg I0508 11:07:29.062903 6 runners.go:184] Created replication controller with name: proxy-service-7rwvf, namespace: e2e-tests-proxy-7rndg, replica count: 1 I0508 11:07:30.113484 6 runners.go:184] proxy-service-7rwvf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 11:07:31.113746 6 runners.go:184] proxy-service-7rwvf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 11:07:32.113953 6 runners.go:184] proxy-service-7rwvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 11:07:33.114175 6 runners.go:184] proxy-service-7rwvf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 11:07:33.118: INFO: setup took 4.109123067s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 8 11:07:33.124: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7rndg/pods/proxy-service-7rwvf-6k8sx:160/proxy/: foo (200; 5.800763ms) May 8 11:07:33.124: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7rndg/pods/proxy-service-7rwvf-6k8sx:1080/proxy/:
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-263c5d7c-911c-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8
11:07:47.534: INFO: Waiting up to 5m0s for pod "pod-secrets-2643afca-911c-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-b4jhg" to be "success or failure" May 8 11:07:47.563: INFO: Pod "pod-secrets-2643afca-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.664797ms May 8 11:07:49.567: INFO: Pod "pod-secrets-2643afca-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032915873s May 8 11:07:51.571: INFO: Pod "pod-secrets-2643afca-911c-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03683666s STEP: Saw pod success May 8 11:07:51.571: INFO: Pod "pod-secrets-2643afca-911c-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:07:51.574: INFO: Trying to get logs from node hunter-worker pod pod-secrets-2643afca-911c-11ea-8adb-0242ac110017 container secret-volume-test: STEP: delete the pod May 8 11:07:51.591: INFO: Waiting for pod pod-secrets-2643afca-911c-11ea-8adb-0242ac110017 to disappear May 8 11:07:51.595: INFO: Pod pod-secrets-2643afca-911c-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:07:51.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-b4jhg" for this suite. 
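The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines come from a loop that polls the pod's phase until it is terminal. A simplified sketch of that loop (`get_phase`, the injected `sleep`, and the parameter names are stand-ins, not the framework's actual API):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    Returns "Succeeded" or "Failed"; raises TimeoutError if neither
    phase is observed within `timeout` seconds.
    """
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)      # the real framework logs elapsed time each poll
        waited += interval
    raise TimeoutError("pod never reached a terminal phase")

# Stubbed phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
```

With the stub above, the loop returns "Succeeded" on the third poll, mirroring the three `Phase=` lines printed for the secrets pod.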
May 8 11:07:57.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:07:57.652: INFO: namespace: e2e-tests-secrets-b4jhg, resource: bindings, ignored listing per whitelist May 8 11:07:57.696: INFO: namespace e2e-tests-secrets-b4jhg deletion completed in 6.098132475s • [SLOW TEST:10.307 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:07:57.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 8 11:07:57.974: INFO: Waiting up to 5m0s for pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-tfzkv" to be "success or failure" May 8 11:07:58.052: INFO: Pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 77.77115ms May 8 11:08:00.055: INFO: Pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.08171239s May 8 11:08:02.059: INFO: Pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085140089s May 8 11:08:04.063: INFO: Pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089301483s STEP: Saw pod success May 8 11:08:04.063: INFO: Pod "downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:08:04.067: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 11:08:04.092: INFO: Waiting for pod downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017 to disappear May 8 11:08:04.147: INFO: Pod downward-api-2c7c7ef1-911c-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:08:04.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tfzkv" for this suite. 
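The downward API test above injects `status.hostIP` into the container through a `fieldRef`. The dotted `fieldPath` lookup can be sketched like this (the resolver and the sample pod dict are illustrative, not the kubelet's implementation, and real fieldPaths also support bracketed label/annotation keys not handled here):

```python
def resolve_field_ref(pod, field_path):
    """Walk a downward-API style fieldPath such as "status.hostIP"
    through a decoded pod object, one dotted segment at a time."""
    value = pod
    for key in field_path.split("."):
        value = value[key]
    return value

# Illustrative pod object; the IPs are made up.
sample_pod = {
    "metadata": {"name": "downward-api-demo"},
    "status": {"hostIP": "172.17.0.3", "podIP": "10.244.1.7"},
}
```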
May 8 11:08:10.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:08:10.386: INFO: namespace: e2e-tests-downward-api-tfzkv, resource: bindings, ignored listing per whitelist May 8 11:08:10.619: INFO: namespace e2e-tests-downward-api-tfzkv deletion completed in 6.468225446s • [SLOW TEST:12.922 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:08:10.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:08:14.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6qm8b" for this suite. 
May 8 11:09:04.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:09:05.091: INFO: namespace: e2e-tests-kubelet-test-6qm8b, resource: bindings, ignored listing per whitelist May 8 11:09:05.111: INFO: namespace e2e-tests-kubelet-test-6qm8b deletion completed in 50.130963525s • [SLOW TEST:54.492 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:09:05.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-54938ab3-911c-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:09:05.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-2nhkc" to be "success or failure" May 8 11:09:05.243: INFO: Pod 
"pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015739ms May 8 11:09:07.328: INFO: Pod "pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088904311s May 8 11:09:09.375: INFO: Pod "pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136240841s STEP: Saw pod success May 8 11:09:09.375: INFO: Pod "pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:09:09.378: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 11:09:09.394: INFO: Waiting for pod pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017 to disappear May 8 11:09:09.413: INFO: Pod pod-configmaps-5493f723-911c-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:09:09.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2nhkc" for this suite. 
May 8 11:09:15.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:09:15.466: INFO: namespace: e2e-tests-configmap-2nhkc, resource: bindings, ignored listing per whitelist May 8 11:09:15.513: INFO: namespace e2e-tests-configmap-2nhkc deletion completed in 6.095900298s • [SLOW TEST:10.402 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:09:15.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:09:15.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-x9vkk" to be "success or failure" May 8 11:09:15.696: INFO: Pod 
"downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.466982ms May 8 11:09:17.802: INFO: Pod "downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119167255s May 8 11:09:19.804: INFO: Pod "downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121826144s STEP: Saw pod success May 8 11:09:19.805: INFO: Pod "downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:09:19.807: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:09:19.831: INFO: Waiting for pod downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017 to disappear May 8 11:09:19.841: INFO: Pod downwardapi-volume-5ac9b5e3-911c-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:09:19.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x9vkk" for this suite. 
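Per the test name above, when a container declares no memory limit, the projected downward API reports the node's allocatable memory as the default. The defaulting rule can be sketched as (function and parameter names are illustrative):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Defaulting exercised by the test above: an unset container
    memory limit is reported via the downward API as the node's
    allocatable memory."""
    return container_limit if container_limit is not None else node_allocatable

GI = 1024 ** 3  # bytes in one Gi, for readability below
```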
May 8 11:09:25.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:09:25.914: INFO: namespace: e2e-tests-projected-x9vkk, resource: bindings, ignored listing per whitelist May 8 11:09:25.919: INFO: namespace e2e-tests-projected-x9vkk deletion completed in 6.074949937s • [SLOW TEST:10.406 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:09:25.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 11:09:26.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-ldmb7' May 8 11:09:28.975: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 11:09:28.975: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 8 11:09:31.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ldmb7' May 8 11:09:31.363: INFO: stderr: "" May 8 11:09:31.363: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:09:31.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ldmb7" for this suite. 
May 8 11:09:37.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:09:37.577: INFO: namespace: e2e-tests-kubectl-ldmb7, resource: bindings, ignored listing per whitelist May 8 11:09:37.594: INFO: namespace e2e-tests-kubectl-ldmb7 deletion completed in 6.138703812s • [SLOW TEST:11.674 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:09:37.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pd9mt May 8 11:09:41.715: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pd9mt STEP: checking the pod's current state and verifying that 
restartCount is present May 8 11:09:41.718: INFO: Initial restart count of pod liveness-http is 0 May 8 11:10:01.863: INFO: Restart count of pod e2e-tests-container-probe-pd9mt/liveness-http is now 1 (20.144753547s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:10:01.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-pd9mt" for this suite. May 8 11:10:07.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:10:08.023: INFO: namespace: e2e-tests-container-probe-pd9mt, resource: bindings, ignored listing per whitelist May 8 11:10:08.026: INFO: namespace e2e-tests-container-probe-pd9mt deletion completed in 6.111538011s • [SLOW TEST:30.432 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:10:08.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:10:12.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qn4fk" for this suite. May 8 11:11:04.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:11:04.489: INFO: namespace: e2e-tests-kubelet-test-qn4fk, resource: bindings, ignored listing per whitelist May 8 11:11:04.498: INFO: namespace e2e-tests-kubelet-test-qn4fk deletion completed in 52.173801978s • [SLOW TEST:56.473 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:11:04.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-wc85 STEP: Creating a pod to test atomic-volume-subpath May 8 11:11:04.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wc85" in namespace "e2e-tests-subpath-m9qzl" to be "success or failure" May 8 11:11:04.762: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Pending", Reason="", readiness=false. Elapsed: 29.997281ms May 8 11:11:06.765: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03331348s May 8 11:11:08.770: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037828513s May 8 11:11:10.994: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262177687s May 8 11:11:12.999: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 8.266741135s May 8 11:11:15.003: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 10.270775653s May 8 11:11:17.007: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 12.275387322s May 8 11:11:19.013: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 14.281422267s May 8 11:11:21.018: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 16.286219002s May 8 11:11:23.022: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 18.290547819s May 8 11:11:25.027: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.295215097s May 8 11:11:27.032: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 22.299768857s May 8 11:11:29.036: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Running", Reason="", readiness=false. Elapsed: 24.303726653s May 8 11:11:31.040: INFO: Pod "pod-subpath-test-projected-wc85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.308026798s STEP: Saw pod success May 8 11:11:31.040: INFO: Pod "pod-subpath-test-projected-wc85" satisfied condition "success or failure" May 8 11:11:31.044: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-wc85 container test-container-subpath-projected-wc85: STEP: delete the pod May 8 11:11:31.089: INFO: Waiting for pod pod-subpath-test-projected-wc85 to disappear May 8 11:11:31.103: INFO: Pod pod-subpath-test-projected-wc85 no longer exists STEP: Deleting pod pod-subpath-test-projected-wc85 May 8 11:11:31.103: INFO: Deleting pod "pod-subpath-test-projected-wc85" in namespace "e2e-tests-subpath-m9qzl" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:11:31.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-m9qzl" for this suite. 
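The subpath test above creates a pod that mounts a projected volume at a `subPath` and waits for it to reach "success or failure". A minimal sketch of that kind of manifest, built as plain data — the names and the configMap source here are illustrative, not the framework's actual generated fixtures:

```python
# Hypothetical sketch of a pod mounting a projected volume via subPath,
# similar in shape to what the subpath conformance test generates.
# Names ("demo-config", mount paths) are illustrative assumptions.

def projected_subpath_pod(name: str, configmap: str) -> dict:
    """Build a pod manifest whose container mounts a single key of a
    projected volume via subPath instead of the whole directory."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "projected-vol",
                "projected": {
                    "sources": [{"configMap": {"name": configmap}}],
                },
            }],
            "containers": [{
                "name": "test-container-subpath",
                "image": "busybox:1.29",
                "command": ["cat", "/mnt/data/key"],
                "volumeMounts": [{
                    "name": "projected-vol",
                    "mountPath": "/mnt/data/key",
                    # subPath mounts one file from the volume rather than
                    # shadowing the whole mountPath directory.
                    "subPath": "key",
                }],
            }],
        },
    }

pod = projected_subpath_pod("pod-subpath-test-projected-demo", "demo-config")
```

The pod runs to completion ("Succeeded" in the log) because the container command exits once it has read the mounted file.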
May 8 11:11:37.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:11:37.260: INFO: namespace: e2e-tests-subpath-m9qzl, resource: bindings, ignored listing per whitelist May 8 11:11:37.270: INFO: namespace e2e-tests-subpath-m9qzl deletion completed in 6.150385032s • [SLOW TEST:32.771 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:11:37.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
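The lifecycle-hook test being set up here ("should execute prestop exec hook properly") creates a pod whose container carries a `preStop` exec hook, deletes it, and then checks that the hook fired before termination. A sketch of that pod shape, under the assumption that the hook notifies a separate handler pod over HTTP (the exact hook command in the real test is not shown in this log):

```python
# Hypothetical sketch of a pod with a preStop exec hook. The wget-based
# hook command and the handler address are assumptions for illustration;
# only the pod/container names mirror the log above.

def prestop_exec_pod(name: str, handler_ip: str) -> dict:
    """Pod whose container runs a preStop exec hook. The kubelet runs the
    hook inside the container before sending the termination signal, and
    waits for it (bounded by terminationGracePeriodSeconds)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "pod-with-prestop-exec-hook",
                "image": "busybox:1.29",
                "command": ["sleep", "600"],
                "lifecycle": {
                    "preStop": {
                        "exec": {
                            # Assumed hook: ping the handler pod so the test
                            # can later "check prestop hook" was executed.
                            "command": ["sh", "-c",
                                        f"wget -qO- http://{handler_ip}:8080/echo?msg=prestop-exec-hook"],
                        },
                    },
                },
            }],
        },
    }
```

The long "still exists" polling sequence below is the expected behavior: deletion is not final until the hook has run and the grace period has elapsed.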
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 11:11:45.439: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:45.464: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:47.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:47.467: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:49.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:49.469: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:51.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:51.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:53.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:53.467: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:55.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:55.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:57.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:57.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:11:59.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:11:59.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:01.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:01.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:03.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:03.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:05.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:05.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:07.464: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear May 8 11:12:07.467: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:09.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:09.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:11.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:11.468: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:12:13.464: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:12:13.468: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:12:13.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dnt8b" for this suite. May 8 11:12:35.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:12:35.504: INFO: namespace: e2e-tests-container-lifecycle-hook-dnt8b, resource: bindings, ignored listing per whitelist May 8 11:12:35.572: INFO: namespace e2e-tests-container-lifecycle-hook-dnt8b deletion completed in 22.092394272s • [SLOW TEST:58.301 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:12:35.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:12:35.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 8 11:12:35.732: INFO: stderr: "" May 8 11:12:35.732: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 8 11:12:35.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p8slb' May 8 11:12:35.980: INFO: stderr: "" May 8 11:12:35.980: INFO: stdout: "replicationcontroller/redis-master created\n" May 8 11:12:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p8slb' May 8 11:12:36.292: INFO: stderr: "" May 8 11:12:36.292: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 8 11:12:37.295: INFO: Selector matched 1 pods for map[app:redis] May 8 11:12:37.296: INFO: Found 0 / 1 May 8 11:12:38.297: INFO: Selector matched 1 pods for map[app:redis] May 8 11:12:38.297: INFO: Found 0 / 1 May 8 11:12:39.296: INFO: Selector matched 1 pods for map[app:redis] May 8 11:12:39.296: INFO: Found 0 / 1 May 8 11:12:40.297: INFO: Selector matched 1 pods for map[app:redis] May 8 11:12:40.297: INFO: Found 1 / 1 May 8 11:12:40.297: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 11:12:40.300: INFO: Selector matched 1 pods for map[app:redis] May 8 11:12:40.300: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 11:12:40.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-b9xqs --namespace=e2e-tests-kubectl-p8slb' May 8 11:12:40.424: INFO: stderr: "" May 8 11:12:40.425: INFO: stdout: "Name: redis-master-b9xqs\nNamespace: e2e-tests-kubectl-p8slb\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Fri, 08 May 2020 11:12:36 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.116\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://579436f18546b5480a4c53e065b9524158814b14b436fcae63cf17eddcd21c6d\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 08 May 2020 11:12:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-52kgj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-52kgj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-52kgj\n Optional: false\nQoS Class: 
BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-p8slb/redis-master-b9xqs to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 8 11:12:40.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-p8slb' May 8 11:12:40.546: INFO: stderr: "" May 8 11:12:40.546: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-p8slb\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-b9xqs\n" May 8 11:12:40.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-p8slb' May 8 11:12:40.660: INFO: stderr: "" May 8 11:12:40.660: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-p8slb\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.187.77\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.116:6379\nSession Affinity: None\nEvents: \n" May 8 11:12:40.664: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe node hunter-control-plane' May 8 11:12:40.809: INFO: stderr: "" May 8 11:12:40.809: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 08 May 2020 11:12:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 08 May 2020 11:12:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 08 May 2020 11:12:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 08 May 2020 11:12:32 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet 
Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 53d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 8 11:12:40.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-p8slb' May 8 11:12:40.919: INFO: stderr: "" May 8 11:12:40.919: INFO: stdout: "Name: e2e-tests-kubectl-p8slb\nLabels: e2e-framework=kubectl\n e2e-run=34dad46a-9119-11ea-8adb-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:12:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p8slb" for this suite. 
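The describe test above pipes manifests to `kubectl create -f -`, then runs `kubectl describe` against the pod, rc, service, node, and namespace. A sketch of the ReplicationController being described — the labels, selector, image, and port are taken from the describe output in the log; the exact manifest the test pipes in is not shown, so treat this as a reconstruction:

```python
# Reconstruction of the redis-master ReplicationController exercised by
# the kubectl describe test. Field values are grounded in the describe
# output above; the manifest layout itself is an assumption.

def redis_master_rc() -> dict:
    """RC with one replica selecting app=redis,role=master pods, as
    reported by `kubectl describe rc redis-master` in the log."""
    labels = {"app": "redis", "role": "master"}
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": "redis-master", "labels": dict(labels)},
        "spec": {
            "replicas": 1,
            "selector": dict(labels),
            "template": {
                "metadata": {"labels": dict(labels)},
                "spec": {"containers": [{
                    "name": "redis-master",
                    "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    "ports": [{"containerPort": 6379}],
                }]},
            },
        },
    }
```

The matching Service (ClusterIP 10.98.187.77 in the log) selects the same `app=redis,role=master` labels, which is why its Endpoints list resolves to the pod IP 10.244.1.116:6379.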
May 8 11:13:02.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:13:02.971: INFO: namespace: e2e-tests-kubectl-p8slb, resource: bindings, ignored listing per whitelist May 8 11:13:03.017: INFO: namespace e2e-tests-kubectl-p8slb deletion completed in 22.093854809s • [SLOW TEST:27.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:13:03.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 8 11:13:03.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-ntw89 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 
--restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 8 11:13:06.826: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0508 11:13:06.753561 964 log.go:172] (0xc000138790) (0xc00074c640) Create stream\nI0508 11:13:06.753627 964 log.go:172] (0xc000138790) (0xc00074c640) Stream added, broadcasting: 1\nI0508 11:13:06.755954 964 log.go:172] (0xc000138790) Reply frame received for 1\nI0508 11:13:06.755999 964 log.go:172] (0xc000138790) (0xc000760280) Create stream\nI0508 11:13:06.756013 964 log.go:172] (0xc000138790) (0xc000760280) Stream added, broadcasting: 3\nI0508 11:13:06.757431 964 log.go:172] (0xc000138790) Reply frame received for 3\nI0508 11:13:06.757506 964 log.go:172] (0xc000138790) (0xc00074c6e0) Create stream\nI0508 11:13:06.757526 964 log.go:172] (0xc000138790) (0xc00074c6e0) Stream added, broadcasting: 5\nI0508 11:13:06.758568 964 log.go:172] (0xc000138790) Reply frame received for 5\nI0508 11:13:06.758610 964 log.go:172] (0xc000138790) (0xc00074c780) Create stream\nI0508 11:13:06.758622 964 log.go:172] (0xc000138790) (0xc00074c780) Stream added, broadcasting: 7\nI0508 11:13:06.759689 964 log.go:172] (0xc000138790) Reply frame received for 7\nI0508 11:13:06.759893 964 log.go:172] (0xc000760280) (3) Writing data frame\nI0508 11:13:06.759991 964 log.go:172] (0xc000760280) (3) Writing data frame\nI0508 11:13:06.760970 964 log.go:172] (0xc000138790) Data frame received for 5\nI0508 11:13:06.760993 964 log.go:172] (0xc00074c6e0) (5) Data frame handling\nI0508 11:13:06.761014 964 log.go:172] (0xc00074c6e0) (5) Data frame sent\nI0508 11:13:06.762217 964 log.go:172] (0xc000138790) Data frame received for 5\nI0508 11:13:06.762237 964 log.go:172] (0xc00074c6e0) (5) Data frame handling\nI0508 11:13:06.762252 964 log.go:172] (0xc00074c6e0) (5) Data frame sent\nI0508 
11:13:06.799389 964 log.go:172] (0xc000138790) Data frame received for 5\nI0508 11:13:06.799436 964 log.go:172] (0xc00074c6e0) (5) Data frame handling\nI0508 11:13:06.799469 964 log.go:172] (0xc000138790) Data frame received for 7\nI0508 11:13:06.799509 964 log.go:172] (0xc00074c780) (7) Data frame handling\nI0508 11:13:06.799897 964 log.go:172] (0xc000138790) (0xc000760280) Stream removed, broadcasting: 3\nI0508 11:13:06.799935 964 log.go:172] (0xc000138790) Data frame received for 1\nI0508 11:13:06.799955 964 log.go:172] (0xc00074c640) (1) Data frame handling\nI0508 11:13:06.799986 964 log.go:172] (0xc00074c640) (1) Data frame sent\nI0508 11:13:06.800000 964 log.go:172] (0xc000138790) (0xc00074c640) Stream removed, broadcasting: 1\nI0508 11:13:06.800103 964 log.go:172] (0xc000138790) Go away received\nI0508 11:13:06.800208 964 log.go:172] (0xc000138790) (0xc00074c640) Stream removed, broadcasting: 1\nI0508 11:13:06.800257 964 log.go:172] (0xc000138790) (0xc000760280) Stream removed, broadcasting: 3\nI0508 11:13:06.800282 964 log.go:172] (0xc000138790) (0xc00074c6e0) Stream removed, broadcasting: 5\nI0508 11:13:06.800308 964 log.go:172] (0xc000138790) (0xc00074c780) Stream removed, broadcasting: 7\n" May 8 11:13:06.826: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:13:08.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ntw89" for this suite. 
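The `run --rm` test that just finished invokes a single kubectl command, attaches stdin, and verifies the job is deleted afterwards. Rebuilt as an argv list for clarity — the command and flags are taken verbatim from the log (note the deprecation warning: `--generator=job/v1` was already slated for removal in this kubectl version):

```python
# The kubectl invocation from the log above, expressed as an argv list.
# Flags are verbatim from the log; only the namespace value is a placeholder.

def run_rm_job_cmd(namespace: str) -> list:
    """Argv for `kubectl run --rm` creating a job, attaching stdin, and
    deleting the job once the attached session ends."""
    return [
        "kubectl", f"--namespace={namespace}",
        "run", "e2e-test-rm-busybox-job",
        "--image=docker.io/library/busybox:1.29",
        "--rm=true",            # delete the job when the attach session ends
        "--generator=job/v1",   # deprecated generator, per the log's warning
        "--restart=OnFailure",
        "--attach=true", "--stdin",
        "--", "sh", "-c", "cat && echo 'stdin closed'",
    ]
```

The `cat` echoes whatever the test writes on stdin ("abcd1234" in the log) and `echo 'stdin closed'` runs once stdin is closed, which is exactly the stdout the test asserts on.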
May 8 11:13:14.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:13:14.884: INFO: namespace: e2e-tests-kubectl-ntw89, resource: bindings, ignored listing per whitelist May 8 11:13:14.918: INFO: namespace e2e-tests-kubectl-ntw89 deletion completed in 6.081336194s • [SLOW TEST:11.901 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:13:14.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 8 11:13:21.708: INFO: 0 pods remaining May 8 11:13:21.708: INFO: 0 pods has nil DeletionTimestamp May 8 11:13:21.708: INFO: May 8 11:13:22.131: INFO: 0 pods remaining May 8 11:13:22.131: INFO: 0 pods has nil DeletionTimestamp May 8 11:13:22.131: INFO: STEP: 
Gathering metrics W0508 11:13:23.433712 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 11:13:23.433: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:13:23.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wbrch" for this suite. 
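This run contains two garbage-collector deleteOptions tests: one where the pods must be orphaned when the rc is deleted, and this one, where the rc must stay around until all its pods are gone. Both behaviors are selected via the `propagationPolicy` in the DeleteOptions body; a sketch of that request body, under the assumption (not stated in the log) that the tests map to the "Orphan" and "Foreground" policies respectively:

```python
# Sketch of the DeleteOptions body that controls dependent garbage
# collection. The mapping of each test to a specific policy is an
# assumption; the log only shows the observable outcomes.

def delete_options(policy: str) -> dict:
    """Build a DeleteOptions body. "Orphan" leaves dependents behind,
    "Foreground" keeps the owner visible until dependents are deleted,
    "Background" deletes the owner immediately."""
    if policy not in ("Orphan", "Background", "Foreground"):
        raise ValueError(f"unknown propagationPolicy: {policy}")
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": policy,
    }
```

With "Foreground" propagation, the rc gets a deletion timestamp and a foregroundDeletion finalizer, which is why the log can still report "0 pods remaining" against the rc before the delete completes.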
May 8 11:13:29.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:13:29.498: INFO: namespace: e2e-tests-gc-wbrch, resource: bindings, ignored listing per whitelist
May 8 11:13:29.550: INFO: namespace e2e-tests-gc-wbrch deletion completed in 6.11310275s
• [SLOW TEST:14.631 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:13:29.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:14:04.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-4b5gt" for this suite.
May 8 11:14:10.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:14:10.551: INFO: namespace: e2e-tests-container-runtime-4b5gt, resource: bindings, ignored listing per whitelist
May 8 11:14:10.571: INFO: namespace e2e-tests-container-runtime-4b5gt deletion completed in 6.089217222s
• [SLOW TEST:41.021 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:14:10.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 8 11:14:10.697: INFO: Creating deployment "nginx-deployment"
May 8 11:14:10.701: INFO: Waiting for observed generation 1
May 8 11:14:12.932: INFO: Waiting for all required pods to come up
May 8 11:14:13.212: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 8 11:14:21.234: INFO: Waiting for deployment "nginx-deployment" to complete
May 8 11:14:21.238: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 8 11:14:21.244: INFO: Updating deployment nginx-deployment
May 8 11:14:21.244: INFO: Waiting for observed generation 2
May 8 11:14:23.261: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 8 11:14:23.264: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 8 11:14:23.266: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 8 11:14:23.374: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 8 11:14:23.374: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 8 11:14:23.377: INFO: Waiting for the second rollout's replicaset of deployment
"nginx-deployment" to have desired number of replicas May 8 11:14:23.381: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 8 11:14:23.381: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 8 11:14:23.387: INFO: Updating deployment nginx-deployment May 8 11:14:23.387: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 8 11:14:23.639: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 8 11:14:23.782: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 8 11:14:26.339: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4ghrc/deployments/nginx-deployment,UID:0aa68b2a-911d-11ea-99e8-0242ac110002,ResourceVersion:9402279,Generation:3,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-08 11:14:23 +0000 UTC 2020-05-08 11:14:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-08 11:14:24 +0000 UTC 2020-05-08 11:14:10 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 8 11:14:26.379: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4ghrc/replicasets/nginx-deployment-5c98f8fb5,UID:10f23805-911d-11ea-99e8-0242ac110002,ResourceVersion:9402274,Generation:3,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0aa68b2a-911d-11ea-99e8-0242ac110002 0xc001717947 0xc001717948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 11:14:26.379: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 8 11:14:26.379: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4ghrc/replicasets/nginx-deployment-85ddf47c5d,UID:0aa8268f-911d-11ea-99e8-0242ac110002,ResourceVersion:9402257,Generation:3,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0aa68b2a-911d-11ea-99e8-0242ac110002 0xc001717a87 0xc001717a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 8 11:14:26.564: INFO: Pod "nginx-deployment-5c98f8fb5-9lr2c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9lr2c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-9lr2c,UID:12783e1e-911d-11ea-99e8-0242ac110002,ResourceVersion:9402260,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecd057 0xc000ecd058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecd100} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc000ecd120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.564: INFO: Pod "nginx-deployment-5c98f8fb5-bpkk7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bpkk7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-bpkk7,UID:1113e25a-911d-11ea-99e8-0242ac110002,ResourceVersion:9402192,Generation:0,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecd217 0xc000ecd218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecd290} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ecd330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.565: INFO: Pod "nginx-deployment-5c98f8fb5-d9vpc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d9vpc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-d9vpc,UID:12652b45-911d-11ea-99e8-0242ac110002,ResourceVersion:9402297,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecd480 0xc000ecd481}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecd520} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc000ecd640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.565: INFO: Pod "nginx-deployment-5c98f8fb5-dml6v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dml6v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-dml6v,UID:1273a769-911d-11ea-99e8-0242ac110002,ResourceVersion:9402329,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecd700 0xc000ecd701}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecd7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ecd810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.565: INFO: Pod "nginx-deployment-5c98f8fb5-fpplf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fpplf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-fpplf,UID:1273a43c-911d-11ea-99e8-0242ac110002,ResourceVersion:9402246,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecd9d0 0xc000ecd9d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecda50} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ecda70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.565: INFO: Pod "nginx-deployment-5c98f8fb5-l4bcc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l4bcc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-l4bcc,UID:125d1b8a-911d-11ea-99e8-0242ac110002,ResourceVersion:9402263,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecdc27 0xc000ecdc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecdca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ecdcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.565: INFO: Pod "nginx-deployment-5c98f8fb5-nzkf4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nzkf4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-nzkf4,UID:12738a54-911d-11ea-99e8-0242ac110002,ResourceVersion:9402318,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc000ecde50 0xc000ecde51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ecded0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ecdef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-px79t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-px79t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-px79t,UID:10fb4e17-911d-11ea-99e8-0242ac110002,ResourceVersion:9402177,Generation:0,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e010 0xc00124e011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e090} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00124e0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-sz9xm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sz9xm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-sz9xm,UID:11160823-911d-11ea-99e8-0242ac110002,ResourceVersion:9402194,Generation:0,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e170 0xc00124e171}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e220} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124e240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-wtnhg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wtnhg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-wtnhg,UID:10fb40a5-911d-11ea-99e8-0242ac110002,ResourceVersion:9402169,Generation:0,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e370 0xc00124e371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124e410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-xhnnx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xhnnx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-xhnnx,UID:12739da2-911d-11ea-99e8-0242ac110002,ResourceVersion:9402250,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e4f0 0xc00124e4f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e5f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00124e620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-xshqv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xshqv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-xshqv,UID:10fa2985-911d-11ea-99e8-0242ac110002,ResourceVersion:9402170,Generation:0,CreationTimestamp:2020-05-08 11:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e707 0xc00124e708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e780} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124e7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-5c98f8fb5-zqznj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zqznj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-5c98f8fb5-zqznj,UID:12651c8e-911d-11ea-99e8-0242ac110002,ResourceVersion:9402323,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 10f23805-911d-11ea-99e8-0242ac110002 0xc00124e870 0xc00124e871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124e8f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00124e910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.566: INFO: Pod "nginx-deployment-85ddf47c5d-2hvk7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2hvk7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-2hvk7,UID:12652d6f-911d-11ea-99e8-0242ac110002,ResourceVersion:9402238,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124e9d0 0xc00124e9d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124ea40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124ea60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-6fnvc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6fnvc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-6fnvc,UID:1273f5bc-911d-11ea-99e8-0242ac110002,ResourceVersion:9402256,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124ead7 0xc00124ead8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00124eb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124eb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-6l9zn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6l9zn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-6l9zn,UID:1274026c-911d-11ea-99e8-0242ac110002,ResourceVersion:9402255,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124ebe7 0xc00124ebe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124ec60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124ec90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-8kdms" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8kdms,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-8kdms,UID:123dbd22-911d-11ea-99e8-0242ac110002,ResourceVersion:9402245,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124ed07 0xc00124ed08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124ed80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124eda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:23 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-b5bwh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5bwh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-b5bwh,UID:1273ef2f-911d-11ea-99e8-0242ac110002,ResourceVersion:9402252,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124ee87 0xc00124ee88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124ef00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124ef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-ct8b6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ct8b6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-ct8b6,UID:12651ad1-911d-11ea-99e8-0242ac110002,ResourceVersion:9402276,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124efa7 0xc00124efa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124f020} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124f040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:23 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-fpthf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fpthf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-fpthf,UID:0ab07e0a-911d-11ea-99e8-0242ac110002,ResourceVersion:9402124,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124f107 0xc00124f108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124f1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124f1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.124,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://407408e553fac9c69b6e70c08a5529c66ea6a1efe4feed16e57a6be05d34c861}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.567: INFO: Pod "nginx-deployment-85ddf47c5d-jb82c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jb82c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-jb82c,UID:127400c3-911d-11ea-99e8-0242ac110002,ResourceVersion:9402254,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124f2a7 0xc00124f2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00124f320} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124f340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-kfbw9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kfbw9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-kfbw9,UID:0ab085f1-911d-11ea-99e8-0242ac110002,ResourceVersion:9402143,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124f3b7 0xc00124f3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124f490} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124f4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.126,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://db81dde30448f371adf227bc400a27c7d01582325524ad633d9bea95ff8b5c3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-lnjth" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lnjth,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-lnjth,UID:0ab71e3e-911d-11ea-99e8-0242ac110002,ResourceVersion:9402134,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124f577 0xc00124f578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00124f600} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124f660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.24,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://818043437b938d20de0cedd51e7507defe7a2703ef652860b6fe0b4056d0f97a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-n6k27" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6k27,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-n6k27,UID:12652cc9-911d-11ea-99e8-0242ac110002,ResourceVersion:9402235,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00124f727 0xc00124f728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124f7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001488010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-n7lgw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n7lgw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-n7lgw,UID:0aada80e-911d-11ea-99e8-0242ac110002,ResourceVersion:9402110,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc0014883e7 0xc0014883e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001489800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001489840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.21,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9e889358c76da84e3d085b705a1660bfeeb97f05052bc79f7ef9d1021a96521e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-pvs5c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvs5c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-pvs5c,UID:125d243c-911d-11ea-99e8-0242ac110002,ResourceVersion:9402266,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc001489ad7 0xc001489ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001489bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001489bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-08 11:14:23 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-r78zf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r78zf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-r78zf,UID:125d2487-911d-11ea-99e8-0242ac110002,ResourceVersion:9402272,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195e027 0xc00195e028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00195e0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.568: INFO: Pod "nginx-deployment-85ddf47c5d-t9pq6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t9pq6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-t9pq6,UID:0aad9f8e-911d-11ea-99e8-0242ac110002,ResourceVersion:9402113,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195e1a7 0xc00195e1a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00195e260} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195e280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.123,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://29302327461c22299d6fceb2c2763e947719ac58436f0d20386f6f6ee85d96c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.569: INFO: Pod "nginx-deployment-85ddf47c5d-vq9pv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vq9pv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-vq9pv,UID:0ab083bb-911d-11ea-99e8-0242ac110002,ResourceVersion:9402105,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195ec67 0xc00195ec68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00195ed70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195edf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.22,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:17 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4011cd69cecde7585b9ea1f187f6b59dddac2b0e7a12208ccecaabb4dcc34d27}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.569: INFO: Pod "nginx-deployment-85ddf47c5d-w8m87" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w8m87,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-w8m87,UID:0aace36f-911d-11ea-99e8-0242ac110002,ResourceVersion:9402087,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195ef17 0xc00195ef18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00195f280} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195f320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.20,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5322a3201abfe13d30d9014170305d57b8e310dfe5590cd3143639e39cda8c60}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.569: INFO: Pod "nginx-deployment-85ddf47c5d-wkxs5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wkxs5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-wkxs5,UID:126522ff-911d-11ea-99e8-0242ac110002,ResourceVersion:9402311,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195f4b7 0xc00195f4b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00195fa30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195fa50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-08 11:14:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.569: INFO: Pod "nginx-deployment-85ddf47c5d-zhzjf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zhzjf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-zhzjf,UID:0ab084df-911d-11ea-99e8-0242ac110002,ResourceVersion:9402122,Generation:0,CreationTimestamp:2020-05-08 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195fbb7 0xc00195fbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00195ff10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00195ff30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.125,StartTime:2020-05-08 11:14:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:14:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d881b1ef0ea722db44e7891ade9d3138359f897db05b7862701292c7cd691da7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 11:14:26.569: INFO: Pod "nginx-deployment-85ddf47c5d-zq87z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zq87z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4ghrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4ghrc/pods/nginx-deployment-85ddf47c5d-zq87z,UID:1273ee4d-911d-11ea-99e8-0242ac110002,ResourceVersion:9402253,Generation:0,CreationTimestamp:2020-05-08 11:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 0aa8268f-911d-11ea-99e8-0242ac110002 0xc00195fff7 0xc00195fff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s5gkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5gkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s5gkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001626290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016262b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:14:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:14:26.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-4ghrc" for this suite. 
May 8 11:14:57.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:14:57.156: INFO: namespace: e2e-tests-deployment-4ghrc, resource: bindings, ignored listing per whitelist May 8 11:14:57.220: INFO: namespace e2e-tests-deployment-4ghrc deletion completed in 30.293970859s • [SLOW TEST:46.649 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:14:57.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-266fdc33-911d-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:14:57.365: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-ktxkk" to be "success or failure" May 8 11:14:57.368: INFO: Pod "pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.082476ms May 8 11:14:59.373: INFO: Pod "pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007934701s May 8 11:15:01.377: INFO: Pod "pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011999385s STEP: Saw pod success May 8 11:15:01.377: INFO: Pod "pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:15:01.380: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 8 11:15:01.433: INFO: Waiting for pod pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017 to disappear May 8 11:15:01.439: INFO: Pod pod-projected-configmaps-2672d279-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:15:01.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ktxkk" for this suite. 
May 8 11:15:07.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:15:07.478: INFO: namespace: e2e-tests-projected-ktxkk, resource: bindings, ignored listing per whitelist May 8 11:15:07.523: INFO: namespace e2e-tests-projected-ktxkk deletion completed in 6.081363602s • [SLOW TEST:10.302 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:15:07.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:15:07.615: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-jbvj8" to be "success or failure" May 8 11:15:07.692: INFO: Pod 
"downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 76.440871ms May 8 11:15:09.696: INFO: Pod "downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080227612s May 8 11:15:11.699: INFO: Pod "downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083881645s STEP: Saw pod success May 8 11:15:11.699: INFO: Pod "downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:15:11.702: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:15:11.790: INFO: Waiting for pod downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017 to disappear May 8 11:15:11.798: INFO: Pod downwardapi-volume-2c9078c0-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:15:11.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jbvj8" for this suite. 
May 8 11:15:17.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:15:17.858: INFO: namespace: e2e-tests-downward-api-jbvj8, resource: bindings, ignored listing per whitelist May 8 11:15:17.925: INFO: namespace e2e-tests-downward-api-jbvj8 deletion completed in 6.122856643s • [SLOW TEST:10.401 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:15:17.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dk455 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 11:15:18.065: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 11:15:40.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.39:8080/dial?request=hostName&protocol=http&host=10.244.2.38&port=8080&tries=1'] 
Namespace:e2e-tests-pod-network-test-dk455 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 11:15:40.213: INFO: >>> kubeConfig: /root/.kube/config I0508 11:15:40.250636 6 log.go:172] (0xc000afd130) (0xc001daef00) Create stream I0508 11:15:40.250682 6 log.go:172] (0xc000afd130) (0xc001daef00) Stream added, broadcasting: 1 I0508 11:15:40.253093 6 log.go:172] (0xc000afd130) Reply frame received for 1 I0508 11:15:40.253407 6 log.go:172] (0xc000afd130) (0xc001c800a0) Create stream I0508 11:15:40.253423 6 log.go:172] (0xc000afd130) (0xc001c800a0) Stream added, broadcasting: 3 I0508 11:15:40.254939 6 log.go:172] (0xc000afd130) Reply frame received for 3 I0508 11:15:40.254986 6 log.go:172] (0xc000afd130) (0xc00180e000) Create stream I0508 11:15:40.255006 6 log.go:172] (0xc000afd130) (0xc00180e000) Stream added, broadcasting: 5 I0508 11:15:40.256116 6 log.go:172] (0xc000afd130) Reply frame received for 5 I0508 11:15:40.344859 6 log.go:172] (0xc000afd130) Data frame received for 3 I0508 11:15:40.344892 6 log.go:172] (0xc001c800a0) (3) Data frame handling I0508 11:15:40.344923 6 log.go:172] (0xc001c800a0) (3) Data frame sent I0508 11:15:40.345842 6 log.go:172] (0xc000afd130) Data frame received for 3 I0508 11:15:40.345903 6 log.go:172] (0xc001c800a0) (3) Data frame handling I0508 11:15:40.346263 6 log.go:172] (0xc000afd130) Data frame received for 5 I0508 11:15:40.346283 6 log.go:172] (0xc00180e000) (5) Data frame handling I0508 11:15:40.347836 6 log.go:172] (0xc000afd130) Data frame received for 1 I0508 11:15:40.347858 6 log.go:172] (0xc001daef00) (1) Data frame handling I0508 11:15:40.347873 6 log.go:172] (0xc001daef00) (1) Data frame sent I0508 11:15:40.347894 6 log.go:172] (0xc000afd130) (0xc001daef00) Stream removed, broadcasting: 1 I0508 11:15:40.347991 6 log.go:172] (0xc000afd130) (0xc001daef00) Stream removed, broadcasting: 1 I0508 11:15:40.348019 6 log.go:172] (0xc000afd130) 
(0xc001c800a0) Stream removed, broadcasting: 3 I0508 11:15:40.348144 6 log.go:172] (0xc000afd130) (0xc00180e000) Stream removed, broadcasting: 5 May 8 11:15:40.348: INFO: Waiting for endpoints: map[] I0508 11:15:40.348412 6 log.go:172] (0xc000afd130) Go away received May 8 11:15:40.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.39:8080/dial?request=hostName&protocol=http&host=10.244.1.142&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dk455 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 11:15:40.350: INFO: >>> kubeConfig: /root/.kube/config I0508 11:15:40.377932 6 log.go:172] (0xc001b144d0) (0xc001a10960) Create stream I0508 11:15:40.377961 6 log.go:172] (0xc001b144d0) (0xc001a10960) Stream added, broadcasting: 1 I0508 11:15:40.388684 6 log.go:172] (0xc001b144d0) Reply frame received for 1 I0508 11:15:40.388739 6 log.go:172] (0xc001b144d0) (0xc0012e7360) Create stream I0508 11:15:40.388754 6 log.go:172] (0xc001b144d0) (0xc0012e7360) Stream added, broadcasting: 3 I0508 11:15:40.390303 6 log.go:172] (0xc001b144d0) Reply frame received for 3 I0508 11:15:40.390361 6 log.go:172] (0xc001b144d0) (0xc001daf040) Create stream I0508 11:15:40.390374 6 log.go:172] (0xc001b144d0) (0xc001daf040) Stream added, broadcasting: 5 I0508 11:15:40.391486 6 log.go:172] (0xc001b144d0) Reply frame received for 5 I0508 11:15:40.472853 6 log.go:172] (0xc001b144d0) Data frame received for 3 I0508 11:15:40.472889 6 log.go:172] (0xc0012e7360) (3) Data frame handling I0508 11:15:40.472909 6 log.go:172] (0xc0012e7360) (3) Data frame sent I0508 11:15:40.473661 6 log.go:172] (0xc001b144d0) Data frame received for 5 I0508 11:15:40.473701 6 log.go:172] (0xc001daf040) (5) Data frame handling I0508 11:15:40.473839 6 log.go:172] (0xc001b144d0) Data frame received for 3 I0508 11:15:40.473871 6 log.go:172] (0xc0012e7360) (3) Data frame handling I0508 11:15:40.475163 6 log.go:172] 
(0xc001b144d0) Data frame received for 1 I0508 11:15:40.475201 6 log.go:172] (0xc001a10960) (1) Data frame handling I0508 11:15:40.475221 6 log.go:172] (0xc001a10960) (1) Data frame sent I0508 11:15:40.475240 6 log.go:172] (0xc001b144d0) (0xc001a10960) Stream removed, broadcasting: 1 I0508 11:15:40.475285 6 log.go:172] (0xc001b144d0) Go away received I0508 11:15:40.475355 6 log.go:172] (0xc001b144d0) (0xc001a10960) Stream removed, broadcasting: 1 I0508 11:15:40.475381 6 log.go:172] (0xc001b144d0) (0xc0012e7360) Stream removed, broadcasting: 3 I0508 11:15:40.475403 6 log.go:172] (0xc001b144d0) (0xc001daf040) Stream removed, broadcasting: 5 May 8 11:15:40.475: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:15:40.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-dk455" for this suite. May 8 11:16:04.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:16:04.546: INFO: namespace: e2e-tests-pod-network-test-dk455, resource: bindings, ignored listing per whitelist May 8 11:16:04.563: INFO: namespace e2e-tests-pod-network-test-dk455 deletion completed in 24.083134926s • [SLOW TEST:46.638 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:16:04.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4e91b753-911d-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:16:04.698: INFO: Waiting up to 5m0s for pod "pod-secrets-4e943121-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-hqcfg" to be "success or failure" May 8 11:16:04.700: INFO: Pod "pod-secrets-4e943121-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167026ms May 8 11:16:06.705: INFO: Pod "pod-secrets-4e943121-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007143899s May 8 11:16:08.709: INFO: Pod "pod-secrets-4e943121-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010997665s STEP: Saw pod success May 8 11:16:08.709: INFO: Pod "pod-secrets-4e943121-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:16:08.711: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4e943121-911d-11ea-8adb-0242ac110017 container secret-volume-test: STEP: delete the pod May 8 11:16:08.735: INFO: Waiting for pod pod-secrets-4e943121-911d-11ea-8adb-0242ac110017 to disappear May 8 11:16:08.746: INFO: Pod pod-secrets-4e943121-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:16:08.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hqcfg" for this suite. May 8 11:16:14.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:16:14.856: INFO: namespace: e2e-tests-secrets-hqcfg, resource: bindings, ignored listing per whitelist May 8 11:16:14.858: INFO: namespace e2e-tests-secrets-hqcfg deletion completed in 6.080477292s • [SLOW TEST:10.295 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:16:14.858: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 8 11:16:14.963: INFO: Waiting up to 5m0s for pod "var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-var-expansion-rk2db" to be "success or failure" May 8 11:16:14.981: INFO: Pod "var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.785539ms May 8 11:16:16.984: INFO: Pod "var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021119809s May 8 11:16:18.988: INFO: Pod "var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024999261s STEP: Saw pod success May 8 11:16:18.988: INFO: Pod "var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:16:18.991: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 11:16:19.083: INFO: Waiting for pod var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017 to disappear May 8 11:16:19.087: INFO: Pod var-expansion-54b6cc62-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:16:19.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-rk2db" for this suite. 
May 8 11:16:25.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:16:25.167: INFO: namespace: e2e-tests-var-expansion-rk2db, resource: bindings, ignored listing per whitelist May 8 11:16:25.199: INFO: namespace e2e-tests-var-expansion-rk2db deletion completed in 6.108713281s • [SLOW TEST:10.341 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:16:25.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:16:25.298: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 8 11:16:30.302: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 11:16:30.302: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 8 11:16:32.306: INFO: Creating deployment "test-rollover-deployment" May 8 11:16:32.316: INFO: Make sure deployment "test-rollover-deployment" performs scaling 
operations May 8 11:16:34.322: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 8 11:16:34.329: INFO: Ensure that both replica sets have 1 created replica May 8 11:16:34.334: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 8 11:16:34.340: INFO: Updating deployment test-rollover-deployment May 8 11:16:34.340: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 8 11:16:36.759: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 8 11:16:36.765: INFO: Make sure deployment "test-rollover-deployment" is complete May 8 11:16:36.770: INFO: all replica sets need to contain the pod-template-hash label May 8 11:16:36.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533395, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:38.792: INFO: all replica sets need to contain the pod-template-hash label May 8 11:16:38.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533398, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:40.778: INFO: all replica sets need to contain the pod-template-hash label May 8 11:16:40.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533398, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:42.779: INFO: all replica sets need to contain the pod-template-hash label May 8 11:16:42.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533398, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:44.777: INFO: all replica sets need to contain the pod-template-hash label May 8 11:16:44.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533398, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:46.779: INFO: all replica sets need to contain the pod-template-hash 
label May 8 11:16:46.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533398, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533392, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:16:48.932: INFO: May 8 11:16:48.932: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 8 11:16:48.987: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-v2vj2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2vj2/deployments/test-rollover-deployment,UID:5f0e6561-911d-11ea-99e8-0242ac110002,ResourceVersion:9403162,Generation:2,CreationTimestamp:2020-05-08 11:16:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-08 11:16:32 +0000 UTC 2020-05-08 11:16:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} 
{Progressing True 2020-05-08 11:16:48 +0000 UTC 2020-05-08 11:16:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 8 11:16:48.991: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-v2vj2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2vj2/replicasets/test-rollover-deployment-5b8479fdb6,UID:6044be3d-911d-11ea-99e8-0242ac110002,ResourceVersion:9403153,Generation:2,CreationTimestamp:2020-05-08 11:16:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5f0e6561-911d-11ea-99e8-0242ac110002 0xc00267da27 0xc00267da28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 8 11:16:48.991: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 8 11:16:48.992: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-v2vj2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2vj2/replicasets/test-rollover-controller,UID:5ade12a7-911d-11ea-99e8-0242ac110002,ResourceVersion:9403161,Generation:2,CreationTimestamp:2020-05-08 11:16:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5f0e6561-911d-11ea-99e8-0242ac110002 0xc00267d87f 
0xc00267d890}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 11:16:48.992: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-v2vj2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2vj2/replicasets/test-rollover-deployment-58494b7559,UID:5f1119a5-911d-11ea-99e8-0242ac110002,ResourceVersion:9403122,Generation:2,CreationTimestamp:2020-05-08 11:16:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5f0e6561-911d-11ea-99e8-0242ac110002 0xc00267d957 0xc00267d958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 11:16:48.995: INFO: Pod "test-rollover-deployment-5b8479fdb6-cs96j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-cs96j,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-v2vj2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v2vj2/pods/test-rollover-deployment-5b8479fdb6-cs96j,UID:60a71e83-911d-11ea-99e8-0242ac110002,ResourceVersion:9403130,Generation:0,CreationTimestamp:2020-05-08 11:16:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6044be3d-911d-11ea-99e8-0242ac110002 0xc002511a27 0xc002511a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p75cl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p75cl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p75cl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002511aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002511ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:16:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:16:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:16:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:16:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.145,StartTime:2020-05-08 11:16:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-08 11:16:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://d1090df757c6a91889c57ae53ab1e7a70daca594d6b4f86d0aaed825ee7c69ad}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:16:48.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-v2vj2" for this suite. May 8 11:16:57.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:16:57.028: INFO: namespace: e2e-tests-deployment-v2vj2, resource: bindings, ignored listing per whitelist May 8 11:16:57.089: INFO: namespace e2e-tests-deployment-v2vj2 deletion completed in 8.089687627s • [SLOW TEST:31.890 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:16:57.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:16:57.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-gmxnq" to be "success or failure" May 8 11:16:57.927: INFO: Pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 116.679956ms May 8 11:16:59.999: INFO: Pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188689388s May 8 11:17:02.190: INFO: Pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.379521175s May 8 11:17:04.194: INFO: Pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383610694s STEP: Saw pod success May 8 11:17:04.194: INFO: Pod "downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:17:04.197: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:17:04.293: INFO: Waiting for pod downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017 to disappear May 8 11:17:04.341: INFO: Pod downwardapi-volume-6e3a4bee-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:17:04.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gmxnq" for this suite. 
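For context, the "should set mode on item file" check above creates a pod that mounts pod metadata through a projected downward API volume and asserts the per-item file mode. A minimal sketch of that kind of manifest follows; the names, image, and mode value are illustrative assumptions, not values taken from this run:

```yaml
# Hypothetical sketch: a pod mounting pod metadata via a projected
# downward API volume, with an explicit mode set on one item.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption; the e2e suite uses its own test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # the per-item mode the test asserts on (value assumed)
```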
May 8 11:17:10.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:17:10.410: INFO: namespace: e2e-tests-projected-gmxnq, resource: bindings, ignored listing per whitelist May 8 11:17:10.503: INFO: namespace e2e-tests-projected-gmxnq deletion completed in 6.143421627s • [SLOW TEST:13.413 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:17:10.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:17:10.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-qbhqg" to be "success or failure" May 8 11:17:10.640: INFO: Pod "downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 10.309581ms May 8 11:17:12.644: INFO: Pod "downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01423263s May 8 11:17:14.648: INFO: Pod "downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018511085s STEP: Saw pod success May 8 11:17:14.648: INFO: Pod "downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:17:14.651: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:17:14.672: INFO: Waiting for pod downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017 to disappear May 8 11:17:14.677: INFO: Pod downwardapi-volume-75e4ee3b-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:17:14.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qbhqg" for this suite. 
May 8 11:17:20.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:17:20.797: INFO: namespace: e2e-tests-projected-qbhqg, resource: bindings, ignored listing per whitelist May 8 11:17:20.846: INFO: namespace e2e-tests-projected-qbhqg deletion completed in 6.165383916s • [SLOW TEST:10.343 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:17:20.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7c156db0-911d-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:17:21.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-4tdnq" to be "success or failure" May 8 11:17:21.222: INFO: Pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.728233ms May 8 11:17:23.276: INFO: Pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080580389s May 8 11:17:25.280: INFO: Pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084914076s May 8 11:17:27.285: INFO: Pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089465678s STEP: Saw pod success May 8 11:17:27.285: INFO: Pod "pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:17:27.287: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 11:17:27.450: INFO: Waiting for pod pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017 to disappear May 8 11:17:27.473: INFO: Pod pod-configmaps-7c314651-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:17:27.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4tdnq" for this suite. 
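The ConfigMap volume test above follows the standard consume-from-volume pattern: create a ConfigMap, mount it into a pod as a volume, and read a key back as a file. A minimal sketch of that pattern, with illustrative names and image (the suite uses its own mounttest image):

```yaml
# Hypothetical sketch: a ConfigMap consumed as a volume, the pattern the
# "consumable from pods in volume" conformance test exercises.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume        # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumption, stands in for the suite's test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```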
May 8 11:17:33.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:17:33.546: INFO: namespace: e2e-tests-configmap-4tdnq, resource: bindings, ignored listing per whitelist May 8 11:17:33.577: INFO: namespace e2e-tests-configmap-4tdnq deletion completed in 6.101011049s • [SLOW TEST:12.731 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:17:33.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 8 11:17:38.261: INFO: Successfully updated pod "annotationupdate83a969cc-911d-11ea-8adb-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:17:40.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-downward-api-x82nq" for this suite. May 8 11:18:02.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:18:02.349: INFO: namespace: e2e-tests-downward-api-x82nq, resource: bindings, ignored listing per whitelist May 8 11:18:02.390: INFO: namespace e2e-tests-downward-api-x82nq deletion completed in 22.089398825s • [SLOW TEST:28.812 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:18:02.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 11:18:02.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-w5db6' May 8 11:18:02.840: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 11:18:02.840: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 8 11:18:03.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-w5db6' May 8 11:18:03.294: INFO: stderr: "" May 8 11:18:03.295: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:18:03.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w5db6" for this suite. 
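The deprecation warning captured above says to use `kubectl create` instead of `kubectl run --generator=job/v1`. As an illustration only (not part of the test source), the deprecated invocation roughly expands to this Job manifest, which could be applied with `kubectl create -f` as the warning recommends:

```yaml
# Illustrative sketch of the Job the deprecated `--generator=job/v1`
# command above creates; name, image, and namespace are taken from the log.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
  namespace: e2e-tests-kubectl-w5db6
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```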
May 8 11:18:09.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:18:09.678: INFO: namespace: e2e-tests-kubectl-w5db6, resource: bindings, ignored listing per whitelist May 8 11:18:09.682: INFO: namespace e2e-tests-kubectl-w5db6 deletion completed in 6.369271446s • [SLOW TEST:7.292 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:18:09.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9942fbbd-911d-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:18:10.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-k96wx" to be "success or failure" May 8 11:18:10.246: INFO: Pod 
"pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 171.335038ms May 8 11:18:12.250: INFO: Pod "pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1757068s May 8 11:18:14.255: INFO: Pod "pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180337842s May 8 11:18:16.258: INFO: Pod "pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184293156s STEP: Saw pod success May 8 11:18:16.259: INFO: Pod "pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:18:16.261: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 11:18:16.283: INFO: Waiting for pod pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017 to disappear May 8 11:18:16.340: INFO: Pod pod-configmaps-9944a846-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:18:16.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k96wx" for this suite. 
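The "consumable from pods in volume with mappings" test above creates a ConfigMap and mounts it with an explicit key-to-path mapping. A minimal sketch of that pattern (hypothetical names and keys, not the test's actual source):

```yaml
# Illustrative sketch: consuming a ConfigMap as a volume with an
# items mapping, as the "with mappings" test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1            # key in the ConfigMap
        path: path/to/data-2   # file path inside the mount
  containers:
  - name: configmap-volume-test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```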
May 8 11:18:22.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:18:22.537: INFO: namespace: e2e-tests-configmap-k96wx, resource: bindings, ignored listing per whitelist May 8 11:18:22.591: INFO: namespace e2e-tests-configmap-k96wx deletion completed in 6.245168447s • [SLOW TEST:12.909 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:18:22.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a0e915b8-911d-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:18:22.828: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-d5knr" to be "success or failure" May 8 11:18:22.893: INFO: Pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 64.780997ms May 8 11:18:24.964: INFO: Pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135093107s May 8 11:18:27.120: INFO: Pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291137874s May 8 11:18:29.123: INFO: Pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.294720225s STEP: Saw pod success May 8 11:18:29.123: INFO: Pod "pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:18:29.126: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 8 11:18:29.163: INFO: Waiting for pod pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017 to disappear May 8 11:18:29.209: INFO: Pod pod-projected-secrets-a0ed85c2-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:18:29.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d5knr" for this suite. 
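The "Projected secret" test above verifies that a Secret exposed through a projected volume is readable from the pod. A minimal sketch of that mechanism, with hypothetical names:

```yaml
# Illustrative sketch: a Secret consumed via a projected volume,
# the mechanism the test above checks. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```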
May 8 11:18:35.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:18:35.313: INFO: namespace: e2e-tests-projected-d5knr, resource: bindings, ignored listing per whitelist May 8 11:18:35.318: INFO: namespace e2e-tests-projected-d5knr deletion completed in 6.103676501s • [SLOW TEST:12.727 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:18:35.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:18:35.433: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.697209ms) May 8 11:18:35.436: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.379347ms) May 8 11:18:35.440: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.233116ms) May 8 11:18:35.443: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.217339ms) May 8 11:18:35.467: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 24.663687ms) May 8 11:18:35.471: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.795529ms) May 8 11:18:35.475: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.248751ms) May 8 11:18:35.478: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.104591ms) May 8 11:18:35.481: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.383454ms) May 8 11:18:35.484: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.606269ms) May 8 11:18:35.487: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.915133ms) May 8 11:18:35.490: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.315871ms) May 8 11:18:35.493: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.323846ms) May 8 11:18:35.496: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.810093ms) May 8 11:18:35.499: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.139358ms) May 8 11:18:35.502: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.979457ms) May 8 11:18:35.506: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.136953ms) May 8 11:18:35.509: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.559918ms) May 8 11:18:35.512: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.233527ms) May 8 11:18:35.516: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.606021ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:18:35.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-zb4sq" for this suite. May 8 11:18:41.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:18:41.613: INFO: namespace: e2e-tests-proxy-zb4sq, resource: bindings, ignored listing per whitelist May 8 11:18:41.627: INFO: namespace e2e-tests-proxy-zb4sq deletion completed in 6.107053834s • [SLOW TEST:6.309 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:18:41.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-2996 STEP: Creating a pod to test atomic-volume-subpath May 8 11:18:41.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2996" in namespace "e2e-tests-subpath-fmhks" to be "success or failure" May 8 11:18:41.936: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Pending", Reason="", readiness=false. Elapsed: 47.283337ms May 8 11:18:44.145: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255880102s May 8 11:18:46.149: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260228607s May 8 11:18:48.167: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278193982s May 8 11:18:50.172: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 8.282777346s May 8 11:18:52.176: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 10.287246617s May 8 11:18:54.180: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 12.290701269s May 8 11:18:56.184: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 14.29501036s May 8 11:18:58.188: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 16.298890969s May 8 11:19:00.192: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 18.302446384s May 8 11:19:02.196: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 20.306981887s May 8 11:19:04.200: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.31094158s May 8 11:19:06.205: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Running", Reason="", readiness=false. Elapsed: 24.315948778s May 8 11:19:08.276: INFO: Pod "pod-subpath-test-configmap-2996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.38661813s STEP: Saw pod success May 8 11:19:08.276: INFO: Pod "pod-subpath-test-configmap-2996" satisfied condition "success or failure" May 8 11:19:08.279: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-2996 container test-container-subpath-configmap-2996: STEP: delete the pod May 8 11:19:09.223: INFO: Waiting for pod pod-subpath-test-configmap-2996 to disappear May 8 11:19:09.462: INFO: Pod pod-subpath-test-configmap-2996 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2996 May 8 11:19:09.462: INFO: Deleting pod "pod-subpath-test-configmap-2996" in namespace "e2e-tests-subpath-fmhks" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:19:09.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-fmhks" for this suite. 
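The subpath test above mounts a single ConfigMap key over an existing file inside the container via `subPath`. A minimal sketch of that pattern (paths and names are hypothetical, not taken from the test source):

```yaml
# Illustrative sketch: overlaying one existing file with a single
# ConfigMap key via subPath, as the atomic-writer subpath test covers.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: subpath-configmap
  containers:
  - name: test-container-subpath
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/nginx/nginx.conf   # existing file being overlaid
      subPath: nginx.conf                # single key from the ConfigMap
```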
May 8 11:19:15.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:19:15.736: INFO: namespace: e2e-tests-subpath-fmhks, resource: bindings, ignored listing per whitelist May 8 11:19:15.792: INFO: namespace e2e-tests-subpath-fmhks deletion completed in 6.268109677s • [SLOW TEST:34.165 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:19:15.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:19:15.937: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-np4g4" to be "success or failure" May 8 11:19:15.942: INFO: Pod "downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.543983ms May 8 11:19:17.946: INFO: Pod "downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091009s May 8 11:19:19.964: INFO: Pod "downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026595957s May 8 11:19:21.968: INFO: Pod "downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030845277s STEP: Saw pod success May 8 11:19:21.968: INFO: Pod "downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:19:21.972: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:19:22.019: INFO: Waiting for pod downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017 to disappear May 8 11:19:22.032: INFO: Pod downwardapi-volume-c08b6cbd-911d-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:19:22.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-np4g4" for this suite. 
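The Downward API test above checks that when a container sets no CPU limit, the downward API volume reports the node's allocatable CPU as the default. A minimal sketch of the volume definition involved, with hypothetical names:

```yaml
# Illustrative sketch: a downwardAPI volume exposing limits.cpu, which
# falls back to node allocatable CPU when the container sets no limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
  containers:
  - name: client-container        # deliberately no resources.limits.cpu
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
```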
May 8 11:19:28.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:19:28.178: INFO: namespace: e2e-tests-downward-api-np4g4, resource: bindings, ignored listing per whitelist May 8 11:19:28.182: INFO: namespace e2e-tests-downward-api-np4g4 deletion completed in 6.145208468s • [SLOW TEST:12.389 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:19:28.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 8 11:19:32.465: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-c7fa7f90-911d-11ea-8adb-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-pjrn5", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-pjrn5/pods/pod-submit-remove-c7fa7f90-911d-11ea-8adb-0242ac110017", UID:"c8055790-911d-11ea-99e8-0242ac110002", ResourceVersion:"9403768", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724533568, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"338325079"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tlqh4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026d2bc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tlqh4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025115f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001cf5b00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002511640)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002511660)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002511668), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc00251166c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533568, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533571, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533571, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533568, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.148", StartTime:(*v1.Time)(0xc00229ef20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00229ef40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://6e5166147dd5a75111ef8bd2aba20e967b868dda6819bbee49c1dbb53d7cbe3f"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:19:41.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pjrn5" for this suite. May 8 11:19:47.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:19:47.352: INFO: namespace: e2e-tests-pods-pjrn5, resource: bindings, ignored listing per whitelist May 8 11:19:47.394: INFO: namespace e2e-tests-pods-pjrn5 deletion completed in 6.106726089s • [SLOW TEST:19.212 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:19:47.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a 
RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 8 11:19:47.470: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:19:57.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kzrt4" for this suite. May 8 11:20:03.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:20:04.002: INFO: namespace: e2e-tests-init-container-kzrt4, resource: bindings, ignored listing per whitelist May 8 11:20:04.076: INFO: namespace e2e-tests-init-container-kzrt4 deletion completed in 6.178132631s • [SLOW TEST:16.682 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:20:04.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-dd5cbbd4-911d-11ea-8adb-0242ac110017 STEP: Creating secret with name s-test-opt-upd-dd5cbc54-911d-11ea-8adb-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-dd5cbbd4-911d-11ea-8adb-0242ac110017 STEP: Updating secret s-test-opt-upd-dd5cbc54-911d-11ea-8adb-0242ac110017 STEP: Creating secret with name s-test-opt-create-dd5cbc82-911d-11ea-8adb-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:21:39.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-njshm" for this suite. May 8 11:22:03.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:22:03.321: INFO: namespace: e2e-tests-projected-njshm, resource: bindings, ignored listing per whitelist May 8 11:22:03.454: INFO: namespace e2e-tests-projected-njshm deletion completed in 24.282863341s • [SLOW TEST:119.378 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 
11:22:03.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:22:03.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-hlgsd" for this suite. May 8 11:22:25.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:22:25.927: INFO: namespace: e2e-tests-kubelet-test-hlgsd, resource: bindings, ignored listing per whitelist May 8 11:22:25.948: INFO: namespace e2e-tests-kubelet-test-hlgsd deletion completed in 22.099645268s • [SLOW TEST:22.494 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:22:25.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 8 11:22:26.226: INFO: Waiting up to 5m0s for pod "pod-32012cd3-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-lqxlf" to be "success or failure" May 8 11:22:26.263: INFO: Pod "pod-32012cd3-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.856151ms May 8 11:22:28.267: INFO: Pod "pod-32012cd3-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041228094s May 8 11:22:30.270: INFO: Pod "pod-32012cd3-911e-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.043573176s May 8 11:22:32.274: INFO: Pod "pod-32012cd3-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.047457947s STEP: Saw pod success May 8 11:22:32.274: INFO: Pod "pod-32012cd3-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:22:32.276: INFO: Trying to get logs from node hunter-worker2 pod pod-32012cd3-911e-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:22:32.309: INFO: Waiting for pod pod-32012cd3-911e-11ea-8adb-0242ac110017 to disappear May 8 11:22:32.344: INFO: Pod pod-32012cd3-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:22:32.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lqxlf" for this suite. May 8 11:22:38.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:22:38.409: INFO: namespace: e2e-tests-emptydir-lqxlf, resource: bindings, ignored listing per whitelist May 8 11:22:38.431: INFO: namespace e2e-tests-emptydir-lqxlf deletion completed in 6.083253287s • [SLOW TEST:12.483 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:22:38.431: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 8 11:22:38.565: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:22:46.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-v9h8t" for this suite. May 8 11:22:52.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:22:52.393: INFO: namespace: e2e-tests-init-container-v9h8t, resource: bindings, ignored listing per whitelist May 8 11:22:52.405: INFO: namespace e2e-tests-init-container-v9h8t deletion completed in 6.201134513s • [SLOW TEST:13.974 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:22:52.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-41b626aa-911e-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:22:52.607: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-xckp8" to be "success or failure" May 8 11:22:52.616: INFO: Pod "pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.096876ms May 8 11:22:54.620: INFO: Pod "pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013483746s May 8 11:22:56.624: INFO: Pod "pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017373229s STEP: Saw pod success May 8 11:22:56.624: INFO: Pod "pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:22:56.626: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017 container secret-volume-test: STEP: delete the pod May 8 11:22:56.773: INFO: Waiting for pod pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017 to disappear May 8 11:22:56.780: INFO: Pod pod-projected-secrets-41ba325c-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:22:56.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xckp8" for this suite. May 8 11:23:02.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:23:02.832: INFO: namespace: e2e-tests-projected-xckp8, resource: bindings, ignored listing per whitelist May 8 11:23:02.895: INFO: namespace e2e-tests-projected-xckp8 deletion completed in 6.111247702s • [SLOW TEST:10.490 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 
11:23:02.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 8 11:23:02.980: INFO: Waiting up to 5m0s for pod "pod-47e97dbe-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-qt4tf" to be "success or failure" May 8 11:23:02.983: INFO: Pod "pod-47e97dbe-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005188ms May 8 11:23:04.987: INFO: Pod "pod-47e97dbe-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007090798s May 8 11:23:06.991: INFO: Pod "pod-47e97dbe-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010853493s STEP: Saw pod success May 8 11:23:06.991: INFO: Pod "pod-47e97dbe-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:23:06.994: INFO: Trying to get logs from node hunter-worker2 pod pod-47e97dbe-911e-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:23:07.024: INFO: Waiting for pod pod-47e97dbe-911e-11ea-8adb-0242ac110017 to disappear May 8 11:23:07.043: INFO: Pod pod-47e97dbe-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:23:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qt4tf" for this suite. 
May 8 11:23:13.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:23:13.100: INFO: namespace: e2e-tests-emptydir-qt4tf, resource: bindings, ignored listing per whitelist May 8 11:23:13.127: INFO: namespace e2e-tests-emptydir-qt4tf deletion completed in 6.080304178s • [SLOW TEST:10.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:23:13.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:23:13.377: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 8 11:23:18.507: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 11:23:18.507: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 
May 8 11:23:18.606: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-pwwbf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pwwbf/deployments/test-cleanup-deployment,UID:512c8e2d-911e-11ea-99e8-0242ac110002,ResourceVersion:9404439,Generation:1,CreationTimestamp:2020-05-08 11:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 8 11:23:18.723: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
May 8 11:23:18.723: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 8 11:23:18.723: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-pwwbf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pwwbf/replicasets/test-cleanup-controller,UID:4e0c6cb7-911e-11ea-99e8-0242ac110002,ResourceVersion:9404440,Generation:1,CreationTimestamp:2020-05-08 11:23:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 512c8e2d-911e-11ea-99e8-0242ac110002 0xc0016b62bf 0xc0016b62d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 8 11:23:18.727: INFO: Pod "test-cleanup-controller-dgzk9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dgzk9,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-pwwbf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pwwbf/pods/test-cleanup-controller-dgzk9,UID:4e1d5a6c-911e-11ea-99e8-0242ac110002,ResourceVersion:9404434,Generation:0,CreationTimestamp:2020-05-08 11:23:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 4e0c6cb7-911e-11ea-99e8-0242ac110002 0xc0016b7277 0xc0016b7278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-86xbg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-86xbg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-86xbg true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016b7300} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016b7320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:23:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:23:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:23:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:23:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.153,StartTime:2020-05-08 11:23:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 11:23:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5a72442732cac2f3c89e562db6016eab0411979fc736c1982086e33686ebe823}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:23:18.727: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pwwbf" for this suite. May 8 11:23:27.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:23:27.149: INFO: namespace: e2e-tests-deployment-pwwbf, resource: bindings, ignored listing per whitelist May 8 11:23:27.186: INFO: namespace e2e-tests-deployment-pwwbf deletion completed in 8.406270973s • [SLOW TEST:14.059 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:23:27.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 8 11:23:27.429: INFO: namespace e2e-tests-kubectl-bxtmh May 8 11:23:27.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bxtmh' May 8 11:23:32.340: INFO: stderr: "" May 8 11:23:32.340: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: 
Waiting for Redis master to start. May 8 11:23:33.345: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:33.345: INFO: Found 0 / 1 May 8 11:23:34.548: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:34.548: INFO: Found 0 / 1 May 8 11:23:35.344: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:35.344: INFO: Found 0 / 1 May 8 11:23:36.345: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:36.345: INFO: Found 0 / 1 May 8 11:23:37.344: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:37.344: INFO: Found 1 / 1 May 8 11:23:37.344: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 11:23:37.347: INFO: Selector matched 1 pods for map[app:redis] May 8 11:23:37.347: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 11:23:37.347: INFO: wait on redis-master startup in e2e-tests-kubectl-bxtmh May 8 11:23:37.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzw5p redis-master --namespace=e2e-tests-kubectl-bxtmh' May 8 11:23:37.451: INFO: stderr: "" May 8 11:23:37.451: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 11:23:36.287 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 11:23:36.287 # Server started, Redis version 3.2.12\n1:M 08 May 11:23:36.287 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 May 11:23:36.287 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 8 11:23:37.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-bxtmh' May 8 11:23:37.584: INFO: stderr: "" May 8 11:23:37.584: INFO: stdout: "service/rm2 exposed\n" May 8 11:23:37.608: INFO: Service rm2 in namespace e2e-tests-kubectl-bxtmh found. STEP: exposing service May 8 11:23:39.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-bxtmh' May 8 11:23:39.766: INFO: stderr: "" May 8 11:23:39.766: INFO: stdout: "service/rm3 exposed\n" May 8 11:23:39.769: INFO: Service rm3 in namespace e2e-tests-kubectl-bxtmh found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:23:41.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bxtmh" for this suite. 
May 8 11:24:05.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:24:05.861: INFO: namespace: e2e-tests-kubectl-bxtmh, resource: bindings, ignored listing per whitelist May 8 11:24:05.872: INFO: namespace e2e-tests-kubectl-bxtmh deletion completed in 24.094185516s • [SLOW TEST:38.686 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:24:05.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6d9efc3d-911e-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:24:06.384: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-wklhc" to be "success or failure" May 8 11:24:06.424: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017": 
Phase="Pending", Reason="", readiness=false. Elapsed: 40.047112ms May 8 11:24:08.428: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043522889s May 8 11:24:10.431: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047090983s May 8 11:24:12.543: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159019275s May 8 11:24:14.561: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177365319s STEP: Saw pod success May 8 11:24:14.561: INFO: Pod "pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:24:14.645: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 8 11:24:16.418: INFO: Waiting for pod pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017 to disappear May 8 11:24:16.423: INFO: Pod pod-projected-configmaps-6da75675-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:24:16.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wklhc" for this suite. 
May 8 11:24:24.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:24:24.659: INFO: namespace: e2e-tests-projected-wklhc, resource: bindings, ignored listing per whitelist May 8 11:24:24.682: INFO: namespace e2e-tests-projected-wklhc deletion completed in 8.255644053s • [SLOW TEST:18.809 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:24:24.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:24:25.120: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:24:34.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lwnmf" for 
this suite. May 8 11:25:24.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:25:24.287: INFO: namespace: e2e-tests-pods-lwnmf, resource: bindings, ignored listing per whitelist May 8 11:25:24.336: INFO: namespace e2e-tests-pods-lwnmf deletion completed in 50.097686709s • [SLOW TEST:59.654 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:25:24.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 8 11:25:28.994: INFO: Successfully updated pod "pod-update-9c3c7980-911e-11ea-8adb-0242ac110017" STEP: verifying the updated pod is in kubernetes May 8 11:25:29.128: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:25:29.128: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-gxwmr" for this suite. May 8 11:25:51.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:25:51.223: INFO: namespace: e2e-tests-pods-gxwmr, resource: bindings, ignored listing per whitelist May 8 11:25:51.319: INFO: namespace e2e-tests-pods-gxwmr deletion completed in 22.171568486s • [SLOW TEST:26.983 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:25:51.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 11:25:51.545: INFO: Waiting up to 5m0s for pod "pod-ac5c5700-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-4722d" to be "success or failure" May 8 11:25:51.551: INFO: Pod "pod-ac5c5700-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.989835ms May 8 11:25:53.555: INFO: Pod "pod-ac5c5700-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00963785s May 8 11:25:55.623: INFO: Pod "pod-ac5c5700-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07846939s STEP: Saw pod success May 8 11:25:55.624: INFO: Pod "pod-ac5c5700-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:25:55.638: INFO: Trying to get logs from node hunter-worker pod pod-ac5c5700-911e-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:25:55.657: INFO: Waiting for pod pod-ac5c5700-911e-11ea-8adb-0242ac110017 to disappear May 8 11:25:55.661: INFO: Pod pod-ac5c5700-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:25:55.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4722d" for this suite. May 8 11:26:03.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:26:03.802: INFO: namespace: e2e-tests-emptydir-4722d, resource: bindings, ignored listing per whitelist May 8 11:26:03.815: INFO: namespace e2e-tests-emptydir-4722d deletion completed in 8.150152601s • [SLOW TEST:12.495 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 8 11:26:03.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 8 11:26:03.952: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:26:04.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tcms7" for this suite. May 8 11:26:10.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:26:10.136: INFO: namespace: e2e-tests-kubectl-tcms7, resource: bindings, ignored listing per whitelist May 8 11:26:10.155: INFO: namespace e2e-tests-kubectl-tcms7 deletion completed in 6.100152037s • [SLOW TEST:6.340 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:26:10.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2xlt7 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2xlt7 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2xlt7 May 8 11:26:10.389: INFO: Found 0 stateful pods, waiting for 1 May 8 11:26:20.394: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 8 11:26:20.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 11:26:20.676: INFO: stderr: "I0508 11:26:20.537688 1149 log.go:172] (0xc0008462c0) (0xc000742640) Create stream\nI0508 11:26:20.537747 1149 log.go:172] (0xc0008462c0) (0xc000742640) Stream added, broadcasting: 1\nI0508 11:26:20.540039 1149 log.go:172] (0xc0008462c0) Reply frame received for 1\nI0508 11:26:20.540080 1149 log.go:172] 
(0xc0008462c0) (0xc0005bebe0) Create stream\nI0508 11:26:20.540088 1149 log.go:172] (0xc0008462c0) (0xc0005bebe0) Stream added, broadcasting: 3\nI0508 11:26:20.540877 1149 log.go:172] (0xc0008462c0) Reply frame received for 3\nI0508 11:26:20.540905 1149 log.go:172] (0xc0008462c0) (0xc0005bed20) Create stream\nI0508 11:26:20.540914 1149 log.go:172] (0xc0008462c0) (0xc0005bed20) Stream added, broadcasting: 5\nI0508 11:26:20.542053 1149 log.go:172] (0xc0008462c0) Reply frame received for 5\nI0508 11:26:20.668091 1149 log.go:172] (0xc0008462c0) Data frame received for 3\nI0508 11:26:20.668125 1149 log.go:172] (0xc0005bebe0) (3) Data frame handling\nI0508 11:26:20.668148 1149 log.go:172] (0xc0005bebe0) (3) Data frame sent\nI0508 11:26:20.668314 1149 log.go:172] (0xc0008462c0) Data frame received for 3\nI0508 11:26:20.668341 1149 log.go:172] (0xc0005bebe0) (3) Data frame handling\nI0508 11:26:20.668577 1149 log.go:172] (0xc0008462c0) Data frame received for 5\nI0508 11:26:20.668590 1149 log.go:172] (0xc0005bed20) (5) Data frame handling\nI0508 11:26:20.670304 1149 log.go:172] (0xc0008462c0) Data frame received for 1\nI0508 11:26:20.670328 1149 log.go:172] (0xc000742640) (1) Data frame handling\nI0508 11:26:20.670340 1149 log.go:172] (0xc000742640) (1) Data frame sent\nI0508 11:26:20.670358 1149 log.go:172] (0xc0008462c0) (0xc000742640) Stream removed, broadcasting: 1\nI0508 11:26:20.670422 1149 log.go:172] (0xc0008462c0) Go away received\nI0508 11:26:20.670572 1149 log.go:172] (0xc0008462c0) (0xc000742640) Stream removed, broadcasting: 1\nI0508 11:26:20.670606 1149 log.go:172] (0xc0008462c0) (0xc0005bebe0) Stream removed, broadcasting: 3\nI0508 11:26:20.670618 1149 log.go:172] (0xc0008462c0) (0xc0005bed20) Stream removed, broadcasting: 5\n" May 8 11:26:20.676: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 11:26:20.676: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' May 8 11:26:20.680: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 11:26:30.684: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 11:26:30.684: INFO: Waiting for statefulset status.replicas updated to 0 May 8 11:26:30.699: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:26:30.699: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:26:30.699: INFO: May 8 11:26:30.699: INFO: StatefulSet ss has not reached scale 3, at 1 May 8 11:26:31.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994546104s May 8 11:26:33.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961856748s May 8 11:26:34.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.667043319s May 8 11:26:35.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.488245216s May 8 11:26:36.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.122694293s May 8 11:26:37.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.117146731s May 8 11:26:38.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.111189257s May 8 11:26:39.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.106110554s May 8 11:26:40.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 100.99017ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2xlt7 May 8 11:26:41.602: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 11:26:41.807: INFO: stderr: "I0508 11:26:41.724551 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc640) Create stream\nI0508 11:26:41.724611 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0508 11:26:41.726914 1172 log.go:172] (0xc0007aa2c0) Reply frame received for 1\nI0508 11:26:41.726969 1172 log.go:172] (0xc0007aa2c0) (0xc000122d20) Create stream\nI0508 11:26:41.726981 1172 log.go:172] (0xc0007aa2c0) (0xc000122d20) Stream added, broadcasting: 3\nI0508 11:26:41.727810 1172 log.go:172] (0xc0007aa2c0) Reply frame received for 3\nI0508 11:26:41.727848 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc6e0) Create stream\nI0508 11:26:41.727864 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc6e0) Stream added, broadcasting: 5\nI0508 11:26:41.728791 1172 log.go:172] (0xc0007aa2c0) Reply frame received for 5\nI0508 11:26:41.800136 1172 log.go:172] (0xc0007aa2c0) Data frame received for 3\nI0508 11:26:41.800171 1172 log.go:172] (0xc000122d20) (3) Data frame handling\nI0508 11:26:41.800185 1172 log.go:172] (0xc000122d20) (3) Data frame sent\nI0508 11:26:41.800195 1172 log.go:172] (0xc0007aa2c0) Data frame received for 3\nI0508 11:26:41.800203 1172 log.go:172] (0xc000122d20) (3) Data frame handling\nI0508 11:26:41.800263 1172 log.go:172] (0xc0007aa2c0) Data frame received for 5\nI0508 11:26:41.800282 1172 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0508 11:26:41.801976 1172 log.go:172] (0xc0007aa2c0) Data frame received for 1\nI0508 11:26:41.802018 1172 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0508 11:26:41.802060 1172 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0508 11:26:41.802090 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0508 11:26:41.802126 1172 log.go:172] (0xc0007aa2c0) Go away received\nI0508 
11:26:41.802423 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0508 11:26:41.802464 1172 log.go:172] (0xc0007aa2c0) (0xc000122d20) Stream removed, broadcasting: 3\nI0508 11:26:41.802500 1172 log.go:172] (0xc0007aa2c0) (0xc0006fc6e0) Stream removed, broadcasting: 5\n" May 8 11:26:41.807: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 11:26:41.807: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 11:26:41.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 11:26:42.026: INFO: stderr: "I0508 11:26:41.943963 1194 log.go:172] (0xc000138580) (0xc00071c5a0) Create stream\nI0508 11:26:41.944024 1194 log.go:172] (0xc000138580) (0xc00071c5a0) Stream added, broadcasting: 1\nI0508 11:26:41.946215 1194 log.go:172] (0xc000138580) Reply frame received for 1\nI0508 11:26:41.946255 1194 log.go:172] (0xc000138580) (0xc00071c640) Create stream\nI0508 11:26:41.946262 1194 log.go:172] (0xc000138580) (0xc00071c640) Stream added, broadcasting: 3\nI0508 11:26:41.946971 1194 log.go:172] (0xc000138580) Reply frame received for 3\nI0508 11:26:41.947003 1194 log.go:172] (0xc000138580) (0xc0006e0dc0) Create stream\nI0508 11:26:41.947011 1194 log.go:172] (0xc000138580) (0xc0006e0dc0) Stream added, broadcasting: 5\nI0508 11:26:41.947792 1194 log.go:172] (0xc000138580) Reply frame received for 5\nI0508 11:26:42.019812 1194 log.go:172] (0xc000138580) Data frame received for 5\nI0508 11:26:42.019848 1194 log.go:172] (0xc0006e0dc0) (5) Data frame handling\nI0508 11:26:42.019859 1194 log.go:172] (0xc0006e0dc0) (5) Data frame sent\nI0508 11:26:42.019865 1194 log.go:172] (0xc000138580) Data frame received for 5\nI0508 11:26:42.019872 1194 log.go:172] (0xc0006e0dc0) (5) Data frame handling\nmv: can't 
rename '/tmp/index.html': No such file or directory\nI0508 11:26:42.019929 1194 log.go:172] (0xc000138580) Data frame received for 3\nI0508 11:26:42.019979 1194 log.go:172] (0xc00071c640) (3) Data frame handling\nI0508 11:26:42.020005 1194 log.go:172] (0xc00071c640) (3) Data frame sent\nI0508 11:26:42.020025 1194 log.go:172] (0xc000138580) Data frame received for 3\nI0508 11:26:42.020039 1194 log.go:172] (0xc00071c640) (3) Data frame handling\nI0508 11:26:42.021526 1194 log.go:172] (0xc000138580) Data frame received for 1\nI0508 11:26:42.021558 1194 log.go:172] (0xc00071c5a0) (1) Data frame handling\nI0508 11:26:42.021570 1194 log.go:172] (0xc00071c5a0) (1) Data frame sent\nI0508 11:26:42.021594 1194 log.go:172] (0xc000138580) (0xc00071c5a0) Stream removed, broadcasting: 1\nI0508 11:26:42.021715 1194 log.go:172] (0xc000138580) Go away received\nI0508 11:26:42.021795 1194 log.go:172] (0xc000138580) (0xc00071c5a0) Stream removed, broadcasting: 1\nI0508 11:26:42.021816 1194 log.go:172] (0xc000138580) (0xc00071c640) Stream removed, broadcasting: 3\nI0508 11:26:42.021827 1194 log.go:172] (0xc000138580) (0xc0006e0dc0) Stream removed, broadcasting: 5\n" May 8 11:26:42.026: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 11:26:42.026: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 11:26:42.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 11:26:42.228: INFO: stderr: "I0508 11:26:42.160402 1217 log.go:172] (0xc0007024d0) (0xc0002dd540) Create stream\nI0508 11:26:42.160458 1217 log.go:172] (0xc0007024d0) (0xc0002dd540) Stream added, broadcasting: 1\nI0508 11:26:42.163011 1217 log.go:172] (0xc0007024d0) Reply frame received for 1\nI0508 11:26:42.163069 1217 log.go:172] (0xc0007024d0) (0xc000216000) Create 
stream\nI0508 11:26:42.163095 1217 log.go:172] (0xc0007024d0) (0xc000216000) Stream added, broadcasting: 3\nI0508 11:26:42.164251 1217 log.go:172] (0xc0007024d0) Reply frame received for 3\nI0508 11:26:42.164288 1217 log.go:172] (0xc0007024d0) (0xc0002160a0) Create stream\nI0508 11:26:42.164302 1217 log.go:172] (0xc0007024d0) (0xc0002160a0) Stream added, broadcasting: 5\nI0508 11:26:42.165295 1217 log.go:172] (0xc0007024d0) Reply frame received for 5\nI0508 11:26:42.221324 1217 log.go:172] (0xc0007024d0) Data frame received for 5\nI0508 11:26:42.221368 1217 log.go:172] (0xc0002160a0) (5) Data frame handling\nI0508 11:26:42.221396 1217 log.go:172] (0xc0002160a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0508 11:26:42.221428 1217 log.go:172] (0xc0007024d0) Data frame received for 3\nI0508 11:26:42.221445 1217 log.go:172] (0xc000216000) (3) Data frame handling\nI0508 11:26:42.221469 1217 log.go:172] (0xc000216000) (3) Data frame sent\nI0508 11:26:42.221501 1217 log.go:172] (0xc0007024d0) Data frame received for 3\nI0508 11:26:42.221514 1217 log.go:172] (0xc000216000) (3) Data frame handling\nI0508 11:26:42.221768 1217 log.go:172] (0xc0007024d0) Data frame received for 5\nI0508 11:26:42.221806 1217 log.go:172] (0xc0002160a0) (5) Data frame handling\nI0508 11:26:42.223364 1217 log.go:172] (0xc0007024d0) Data frame received for 1\nI0508 11:26:42.223402 1217 log.go:172] (0xc0002dd540) (1) Data frame handling\nI0508 11:26:42.223416 1217 log.go:172] (0xc0002dd540) (1) Data frame sent\nI0508 11:26:42.223427 1217 log.go:172] (0xc0007024d0) (0xc0002dd540) Stream removed, broadcasting: 1\nI0508 11:26:42.223451 1217 log.go:172] (0xc0007024d0) Go away received\nI0508 11:26:42.223700 1217 log.go:172] (0xc0007024d0) (0xc0002dd540) Stream removed, broadcasting: 1\nI0508 11:26:42.223723 1217 log.go:172] (0xc0007024d0) (0xc000216000) Stream removed, broadcasting: 3\nI0508 11:26:42.223738 1217 log.go:172] (0xc0007024d0) (0xc0002160a0) Stream 
removed, broadcasting: 5\n" May 8 11:26:42.228: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 11:26:42.228: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 11:26:42.232: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 8 11:26:52.288: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 11:26:52.288: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 11:26:52.288: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 8 11:26:52.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 11:26:52.565: INFO: stderr: "I0508 11:26:52.490180 1241 log.go:172] (0xc000138840) (0xc0005e7220) Create stream\nI0508 11:26:52.490257 1241 log.go:172] (0xc000138840) (0xc0005e7220) Stream added, broadcasting: 1\nI0508 11:26:52.492567 1241 log.go:172] (0xc000138840) Reply frame received for 1\nI0508 11:26:52.492633 1241 log.go:172] (0xc000138840) (0xc00076c000) Create stream\nI0508 11:26:52.492663 1241 log.go:172] (0xc000138840) (0xc00076c000) Stream added, broadcasting: 3\nI0508 11:26:52.493736 1241 log.go:172] (0xc000138840) Reply frame received for 3\nI0508 11:26:52.493775 1241 log.go:172] (0xc000138840) (0xc0005e72c0) Create stream\nI0508 11:26:52.493787 1241 log.go:172] (0xc000138840) (0xc0005e72c0) Stream added, broadcasting: 5\nI0508 11:26:52.494665 1241 log.go:172] (0xc000138840) Reply frame received for 5\nI0508 11:26:52.559069 1241 log.go:172] (0xc000138840) Data frame received for 5\nI0508 11:26:52.559166 1241 log.go:172] (0xc0005e72c0) (5) Data frame handling\nI0508 11:26:52.559200 1241 
log.go:172] (0xc000138840) Data frame received for 3\nI0508 11:26:52.559212 1241 log.go:172] (0xc00076c000) (3) Data frame handling\nI0508 11:26:52.559225 1241 log.go:172] (0xc00076c000) (3) Data frame sent\nI0508 11:26:52.559232 1241 log.go:172] (0xc000138840) Data frame received for 3\nI0508 11:26:52.559237 1241 log.go:172] (0xc00076c000) (3) Data frame handling\nI0508 11:26:52.560845 1241 log.go:172] (0xc000138840) Data frame received for 1\nI0508 11:26:52.560891 1241 log.go:172] (0xc0005e7220) (1) Data frame handling\nI0508 11:26:52.560912 1241 log.go:172] (0xc0005e7220) (1) Data frame sent\nI0508 11:26:52.560940 1241 log.go:172] (0xc000138840) (0xc0005e7220) Stream removed, broadcasting: 1\nI0508 11:26:52.560971 1241 log.go:172] (0xc000138840) Go away received\nI0508 11:26:52.561439 1241 log.go:172] (0xc000138840) (0xc0005e7220) Stream removed, broadcasting: 1\nI0508 11:26:52.561497 1241 log.go:172] (0xc000138840) (0xc00076c000) Stream removed, broadcasting: 3\nI0508 11:26:52.561535 1241 log.go:172] (0xc000138840) (0xc0005e72c0) Stream removed, broadcasting: 5\n" May 8 11:26:52.565: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 11:26:52.565: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 11:26:52.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2xlt7 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 11:26:53.033: INFO: stderr: "I0508 11:26:52.680382 1263 log.go:172] (0xc000138630) (0xc0001ff360) Create stream\nI0508 11:26:52.680439 1263 log.go:172] (0xc000138630) (0xc0001ff360) Stream added, broadcasting: 1\nI0508 11:26:52.683274 1263 log.go:172] (0xc000138630) Reply frame received for 1\nI0508 11:26:52.683323 1263 log.go:172] (0xc000138630) (0xc00054a000) Create stream\nI0508 11:26:52.683335 1263 log.go:172] (0xc000138630) (0xc00054a000) 
Stream added, broadcasting: 3\nI0508 11:26:52.684294 1263 log.go:172] (0xc000138630) Reply frame received for 3\nI0508 11:26:52.684325 1263 log.go:172] (0xc000138630) (0xc000504000) Create stream\nI0508 11:26:52.684335 1263 log.go:172] (0xc000138630) (0xc000504000) Stream added, broadcasting: 5\nI0508 11:26:52.685509 1263 log.go:172] (0xc000138630) Reply frame received for 5\nI0508 11:26:53.026472 1263 log.go:172] (0xc000138630) Data frame received for 3\nI0508 11:26:53.026501 1263 log.go:172] (0xc00054a000) (3) Data frame handling\nI0508 11:26:53.026512 1263 log.go:172] (0xc00054a000) (3) Data frame sent\nI0508 11:26:53.026584 1263 log.go:172] (0xc000138630) Data frame received for 5\nI0508 11:26:53.026594 1263 log.go:172] (0xc000504000) (5) Data frame handling\nI0508 11:26:53.026782 1263 log.go:172] (0xc000138630) Data frame received for 3\nI0508 11:26:53.026824 1263 log.go:172] (0xc00054a000) (3) Data frame handling\nI0508 11:26:53.028709 1263 log.go:172] (0xc000138630) Data frame received for 1\nI0508 11:26:53.028762 1263 log.go:172] (0xc0001ff360) (1) Data frame handling\nI0508 11:26:53.028803 1263 log.go:172] (0xc0001ff360) (1) Data frame sent\nI0508 11:26:53.028857 1263 log.go:172] (0xc000138630) (0xc0001ff360) Stream removed, broadcasting: 1\nI0508 11:26:53.028959 1263 log.go:172] (0xc000138630) Go away received\nI0508 11:26:53.029054 1263 log.go:172] (0xc000138630) (0xc0001ff360) Stream removed, broadcasting: 1\nI0508 11:26:53.029066 1263 log.go:172] (0xc000138630) (0xc00054a000) Stream removed, broadcasting: 3\nI0508 11:26:53.029072 1263 log.go:172] (0xc000138630) (0xc000504000) Stream removed, broadcasting: 5\n" May 8 11:26:53.034: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 11:26:53.034: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 11:26:53.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-2xlt7 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 11:26:53.259: INFO: stderr: "I0508 11:26:53.161439 1286 log.go:172] (0xc00014c840) (0xc00075a640) Create stream\nI0508 11:26:53.161501 1286 log.go:172] (0xc00014c840) (0xc00075a640) Stream added, broadcasting: 1\nI0508 11:26:53.163453 1286 log.go:172] (0xc00014c840) Reply frame received for 1\nI0508 11:26:53.163488 1286 log.go:172] (0xc00014c840) (0xc00075a6e0) Create stream\nI0508 11:26:53.163496 1286 log.go:172] (0xc00014c840) (0xc00075a6e0) Stream added, broadcasting: 3\nI0508 11:26:53.164204 1286 log.go:172] (0xc00014c840) Reply frame received for 3\nI0508 11:26:53.164237 1286 log.go:172] (0xc00014c840) (0xc0007c6c80) Create stream\nI0508 11:26:53.164252 1286 log.go:172] (0xc00014c840) (0xc0007c6c80) Stream added, broadcasting: 5\nI0508 11:26:53.165094 1286 log.go:172] (0xc00014c840) Reply frame received for 5\nI0508 11:26:53.254789 1286 log.go:172] (0xc00014c840) Data frame received for 5\nI0508 11:26:53.254821 1286 log.go:172] (0xc0007c6c80) (5) Data frame handling\nI0508 11:26:53.254859 1286 log.go:172] (0xc00014c840) Data frame received for 3\nI0508 11:26:53.254873 1286 log.go:172] (0xc00075a6e0) (3) Data frame handling\nI0508 11:26:53.254889 1286 log.go:172] (0xc00075a6e0) (3) Data frame sent\nI0508 11:26:53.254901 1286 log.go:172] (0xc00014c840) Data frame received for 3\nI0508 11:26:53.254908 1286 log.go:172] (0xc00075a6e0) (3) Data frame handling\nI0508 11:26:53.256086 1286 log.go:172] (0xc00014c840) Data frame received for 1\nI0508 11:26:53.256108 1286 log.go:172] (0xc00075a640) (1) Data frame handling\nI0508 11:26:53.256131 1286 log.go:172] (0xc00075a640) (1) Data frame sent\nI0508 11:26:53.256149 1286 log.go:172] (0xc00014c840) (0xc00075a640) Stream removed, broadcasting: 1\nI0508 11:26:53.256168 1286 log.go:172] (0xc00014c840) Go away received\nI0508 11:26:53.256329 1286 log.go:172] (0xc00014c840) (0xc00075a640) Stream removed, 
broadcasting: 1\nI0508 11:26:53.256345 1286 log.go:172] (0xc00014c840) (0xc00075a6e0) Stream removed, broadcasting: 3\nI0508 11:26:53.256353 1286 log.go:172] (0xc00014c840) (0xc0007c6c80) Stream removed, broadcasting: 5\n" May 8 11:26:53.259: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 11:26:53.259: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 11:26:53.259: INFO: Waiting for statefulset status.replicas updated to 0 May 8 11:26:53.262: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 8 11:27:03.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 11:27:03.271: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 11:27:03.271: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 11:27:03.299: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:03.299: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:03.299: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:03.299: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:03.299: INFO: May 8 11:27:03.299: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:04.309: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:04.309: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:04.309: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:04.309: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:04.309: INFO: May 8 11:27:04.309: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:05.344: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:05.344: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:05.344: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:05.344: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:05.344: INFO: May 8 11:27:05.344: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:06.349: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:06.349: INFO: 
ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:06.349: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:06.349: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:06.349: INFO: May 8 11:27:06.349: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:07.353: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:07.353: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:07.353: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:07.353: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:07.353: INFO: May 8 11:27:07.353: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:08.358: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:08.358: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:08.359: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:08.359: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:08.359: INFO: May 8 11:27:08.359: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:09.364: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:09.364: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:09.364: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:09.364: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:09.364: INFO: May 8 11:27:09.364: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:10.369: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:10.369: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:10.369: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:10.369: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:10.369: INFO: May 8 11:27:10.369: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 11:27:11.517: INFO: POD NODE PHASE GRACE CONDITIONS May 8 11:27:11.517: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:10 +0000 UTC }] May 8 11:27:11.517: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:26:30 +0000 UTC }] May 8 11:27:11.517: INFO: May 8 11:27:11.517: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 11:27:12.521: INFO: Verifying statefulset ss doesn't scale past 0 for another 761.376953ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-2xlt7 May 8 11:27:13.526: INFO: Scaling statefulset ss to 0 May 8 11:27:13.538: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 8 11:27:13.540: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2xlt7 May 8 11:27:13.542: INFO: Scaling statefulset ss to 0 May 8 11:27:13.552: INFO:
Waiting for statefulset status.replicas updated to 0 May 8 11:27:13.554: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:27:13.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2xlt7" for this suite. May 8 11:27:19.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:27:19.755: INFO: namespace: e2e-tests-statefulset-2xlt7, resource: bindings, ignored listing per whitelist May 8 11:27:19.807: INFO: namespace e2e-tests-statefulset-2xlt7 deletion completed in 6.207410768s • [SLOW TEST:69.652 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:27:19.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 11:27:20.171: INFO: Waiting up to 5m0s for pod "pod-e1250439-911e-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-54wsv" to be "success or failure" May 8 11:27:20.177: INFO: Pod "pod-e1250439-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.803909ms May 8 11:27:22.180: INFO: Pod "pod-e1250439-911e-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008578793s May 8 11:27:24.183: INFO: Pod "pod-e1250439-911e-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011650397s STEP: Saw pod success May 8 11:27:24.183: INFO: Pod "pod-e1250439-911e-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:27:24.185: INFO: Trying to get logs from node hunter-worker2 pod pod-e1250439-911e-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:27:24.485: INFO: Waiting for pod pod-e1250439-911e-11ea-8adb-0242ac110017 to disappear May 8 11:27:24.520: INFO: Pod pod-e1250439-911e-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:27:24.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-54wsv" for this suite. 
May 8 11:27:30.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:27:30.576: INFO: namespace: e2e-tests-emptydir-54wsv, resource: bindings, ignored listing per whitelist May 8 11:27:30.615: INFO: namespace e2e-tests-emptydir-54wsv deletion completed in 6.091570804s • [SLOW TEST:10.808 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:27:30.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:27:30.785: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 8 11:27:30.819: INFO: Pod name sample-pod: Found 0 pods out of 1 May 8 11:27:35.823: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 11:27:35.823: INFO: Creating deployment "test-rolling-update-deployment" May 8 11:27:35.826: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 8 11:27:35.905: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 8 11:27:38.312: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 8 11:27:38.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534055, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:27:40.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534056, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534055, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:27:42.317: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 8 11:27:42.325: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2zhft,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2zhft/deployments/test-rolling-update-deployment,UID:ea8af4dc-911e-11ea-99e8-0242ac110002,ResourceVersion:9405371,Generation:1,CreationTimestamp:2020-05-08 11:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-08 11:27:36 +0000 UTC 2020-05-08 11:27:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-08 11:27:41 +0000 UTC 2020-05-08 11:27:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 8 11:27:42.327: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2zhft,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2zhft/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ea978157-911e-11ea-99e8-0242ac110002,ResourceVersion:9405362,Generation:1,CreationTimestamp:2020-05-08 11:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ea8af4dc-911e-11ea-99e8-0242ac110002 0xc001c76e77 0xc001c76e78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 8 11:27:42.327: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 8 11:27:42.327: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2zhft,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2zhft/replicasets/test-rolling-update-controller,UID:e78a368a-911e-11ea-99e8-0242ac110002,ResourceVersion:9405370,Generation:2,CreationTimestamp:2020-05-08 11:27:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ea8af4dc-911e-11ea-99e8-0242ac110002 0xc0017cdec7 0xc0017cdec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 11:27:42.330: INFO: Pod "test-rolling-update-deployment-75db98fb4c-8q8f5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-8q8f5,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2zhft,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2zhft/pods/test-rolling-update-deployment-75db98fb4c-8q8f5,UID:ea97de46-911e-11ea-99e8-0242ac110002,ResourceVersion:9405361,Generation:0,CreationTimestamp:2020-05-08 11:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ea978157-911e-11ea-99e8-0242ac110002 0xc001e7b2a7 0xc001e7b2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zjhdz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zjhdz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zjhdz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e7b3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e7b410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:27:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:27:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:27:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:27:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.158,StartTime:2020-05-08 11:27:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-08 11:27:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6a97e66c286a63679770cd046640d4be1fc1f9dd99f3fcddc61ccf145dc0c105}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:27:42.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2zhft" 
for this suite. May 8 11:27:50.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:27:50.418: INFO: namespace: e2e-tests-deployment-2zhft, resource: bindings, ignored listing per whitelist May 8 11:27:50.439: INFO: namespace e2e-tests-deployment-2zhft deletion completed in 8.105419729s • [SLOW TEST:19.823 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:27:50.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK 
> /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x7pmw.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7pmw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x7pmw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x7pmw.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7pmw.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x7pmw.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:27:59.124: INFO: DNS probes using e2e-tests-dns-x7pmw/dns-test-f384598c-911e-11ea-8adb-0242ac110017 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:27:59.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-x7pmw" for this suite. 
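The wheezy/jessie probe scripts above are dense to read inline; each clause is the same pattern repeated per record and protocol. A minimal sketch of that pattern, with the lookup command made a parameter purely for illustration (the real probe pods hardcode `dig +notcp|+tcp +noall +answer +search <name> A` and `getent hosts`, and write markers under /results):

```shell
#!/usr/bin/env bash
# Sketch of the per-record probe pattern from the DNS test above: run a
# lookup and write an OK marker file only when the answer is non-empty.
# The lookup command is injectable here for illustration; the real pods
# hardcode dig/getent and the /results directory.
probe() {
  local name=$1 outfile=$2
  shift 2
  local check
  check="$("$@" "$name")" && test -n "$check" && echo OK > "$outfile"
}
```

In the test this body runs inside a `for i in $(seq 1 600); do ...; sleep 1; done` loop, once per UDP/TCP record variant, and the framework polls /results for the OK files to declare the probes succeeded.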
May 8 11:28:05.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:28:05.228: INFO: namespace: e2e-tests-dns-x7pmw, resource: bindings, ignored listing per whitelist May 8 11:28:05.263: INFO: namespace e2e-tests-dns-x7pmw deletion completed in 6.091418718s • [SLOW TEST:14.824 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:28:05.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 8 11:28:05.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 11:28:05.421: INFO: Waiting for terminating namespaces to be deleted... 
May 8 11:28:05.423: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 8 11:28:05.428: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 8 11:28:05.428: INFO: Container kube-proxy ready: true, restart count 0 May 8 11:28:05.428: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 8 11:28:05.428: INFO: Container kindnet-cni ready: true, restart count 0 May 8 11:28:05.428: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 8 11:28:05.428: INFO: Container coredns ready: true, restart count 0 May 8 11:28:05.428: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 8 11:28:05.434: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 8 11:28:05.434: INFO: Container kube-proxy ready: true, restart count 0 May 8 11:28:05.434: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 8 11:28:05.434: INFO: Container kindnet-cni ready: true, restart count 0 May 8 11:28:05.434: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 8 11:28:05.434: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fe9eb707-911e-11ea-8adb-0242ac110017 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-fe9eb707-911e-11ea-8adb-0242ac110017 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-fe9eb707-911e-11ea-8adb-0242ac110017 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:28:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nqj6h" for this suite. May 8 11:28:43.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:28:43.707: INFO: namespace: e2e-tests-sched-pred-nqj6h, resource: bindings, ignored listing per whitelist May 8 11:28:43.779: INFO: namespace e2e-tests-sched-pred-nqj6h deletion completed in 30.103136853s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:38.515 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:28:43.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 8 11:28:43.917: INFO: Waiting up to 5m0s for pod "pod-131e4d21-911f-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-mbm5t" to be "success or failure" May 8 11:28:43.928: INFO: Pod "pod-131e4d21-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.213491ms May 8 11:28:45.973: INFO: Pod "pod-131e4d21-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056223202s May 8 11:28:47.978: INFO: Pod "pod-131e4d21-911f-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.061153228s May 8 11:28:49.982: INFO: Pod "pod-131e4d21-911f-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064814149s STEP: Saw pod success May 8 11:28:49.982: INFO: Pod "pod-131e4d21-911f-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:28:49.984: INFO: Trying to get logs from node hunter-worker pod pod-131e4d21-911f-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:28:50.072: INFO: Waiting for pod pod-131e4d21-911f-11ea-8adb-0242ac110017 to disappear May 8 11:28:50.102: INFO: Pod pod-131e4d21-911f-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:28:50.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mbm5t" for this suite. 
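The emptyDir "correct mode" assertion above boils down to mounting the volume on tmpfs and comparing the permission bits on the mount point against the expected value. A hedged reduction of that comparison (`check_mode` is not a framework helper, and the e2e test performs the equivalent check inside the pod against its mount path):

```shell
#!/usr/bin/env bash
# Illustrative reduction of the mode assertion: stat a directory and
# compare its permission bits to an expected value. The conformance
# test does this inside the pod against the emptyDir mount point.
check_mode() {
  local path=$1 want=$2
  [ "$(stat -c '%a' "$path")" = "$want" ]
}
```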
May 8 11:28:56.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:28:56.158: INFO: namespace: e2e-tests-emptydir-mbm5t, resource: bindings, ignored listing per whitelist May 8 11:28:56.245: INFO: namespace e2e-tests-emptydir-mbm5t deletion completed in 6.138862415s • [SLOW TEST:12.466 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:28:56.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1a8dd0c4-911f-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:28:56.459: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-vlnz9" to be "success or failure" May 8 11:28:56.467: INFO: Pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 8.178452ms May 8 11:28:58.472: INFO: Pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012933349s May 8 11:29:00.565: INFO: Pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106158564s May 8 11:29:02.570: INFO: Pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110584547s STEP: Saw pod success May 8 11:29:02.570: INFO: Pod "pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:29:02.573: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 8 11:29:02.620: INFO: Waiting for pod pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017 to disappear May 8 11:29:02.645: INFO: Pod pod-projected-configmaps-1a909943-911f-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:29:02.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vlnz9" for this suite. 
May 8 11:29:08.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:29:08.730: INFO: namespace: e2e-tests-projected-vlnz9, resource: bindings, ignored listing per whitelist May 8 11:29:08.770: INFO: namespace e2e-tests-projected-vlnz9 deletion completed in 6.120375532s • [SLOW TEST:12.524 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:29:08.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-21ffe129-911f-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:29:08.887: INFO: Waiting up to 5m0s for pod "pod-configmaps-22021495-911f-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-nlftj" to be "success or failure" May 8 11:29:08.960: INFO: Pod "pod-configmaps-22021495-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.261605ms May 8 11:29:10.965: INFO: Pod "pod-configmaps-22021495-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077627961s May 8 11:29:12.969: INFO: Pod "pod-configmaps-22021495-911f-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081928511s STEP: Saw pod success May 8 11:29:12.969: INFO: Pod "pod-configmaps-22021495-911f-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:29:12.973: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-22021495-911f-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 11:29:12.995: INFO: Waiting for pod pod-configmaps-22021495-911f-11ea-8adb-0242ac110017 to disappear May 8 11:29:12.999: INFO: Pod pod-configmaps-22021495-911f-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:29:12.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nlftj" for this suite. 
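A side note on the defaultMode value exercised by these volume tests: the API serializes file modes as decimal integers (the pod dump earlier in this log shows `DefaultMode:*420`), while the mode observed on disk reads as octal. The mapping is plain base conversion, 420 decimal being 0644:

```shell
# The API stores volume file modes as decimal integers, so the 420 in
# the pod dump earlier in this log corresponds to 0644 on disk.
to_octal() {
  printf '%o' "$1"
}

to_octal 420   # 644
```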
May 8 11:29:19.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:29:19.057: INFO: namespace: e2e-tests-configmap-nlftj, resource: bindings, ignored listing per whitelist May 8 11:29:19.100: INFO: namespace e2e-tests-configmap-nlftj deletion completed in 6.097965362s • [SLOW TEST:10.330 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:29:19.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:29:19.306: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 8 11:29:19.317: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:19.319: INFO: Number of nodes with available pods: 0 May 8 11:29:19.319: INFO: Node hunter-worker is running more than one daemon pod May 8 11:29:20.417: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:20.420: INFO: Number of nodes with available pods: 0 May 8 11:29:20.420: INFO: Node hunter-worker is running more than one daemon pod May 8 11:29:21.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:21.328: INFO: Number of nodes with available pods: 0 May 8 11:29:21.328: INFO: Node hunter-worker is running more than one daemon pod May 8 11:29:22.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:22.638: INFO: Number of nodes with available pods: 0 May 8 11:29:22.638: INFO: Node hunter-worker is running more than one daemon pod May 8 11:29:23.374: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:23.409: INFO: Number of nodes with available pods: 1 May 8 11:29:23.409: INFO: Node hunter-worker is running more than one daemon pod May 8 11:29:24.471: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:24.475: INFO: Number of nodes with available pods: 2 May 8 11:29:24.475: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 8 11:29:24.656: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:24.656: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:24.696: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:25.702: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:25.702: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:25.705: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:26.701: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:26.701: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:26.704: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:27.702: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:27.702: INFO: Wrong image for pod: daemon-set-psk5n. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:27.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:28.702: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:28.702: INFO: Pod daemon-set-5sskh is not available May 8 11:29:28.702: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:28.705: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:29.701: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:29.701: INFO: Pod daemon-set-5sskh is not available May 8 11:29:29.701: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:29.709: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:30.702: INFO: Wrong image for pod: daemon-set-5sskh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:30.702: INFO: Pod daemon-set-5sskh is not available May 8 11:29:30.702: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 8 11:29:30.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:31.702: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:31.702: INFO: Pod daemon-set-zm52j is not available May 8 11:29:31.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:32.709: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:32.709: INFO: Pod daemon-set-zm52j is not available May 8 11:29:32.713: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:33.701: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:33.701: INFO: Pod daemon-set-zm52j is not available May 8 11:29:33.705: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:34.710: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:34.710: INFO: Pod daemon-set-zm52j is not available May 8 11:29:34.713: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:35.776: INFO: Wrong image for pod: daemon-set-psk5n. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:35.784: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:36.701: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:36.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:37.702: INFO: Wrong image for pod: daemon-set-psk5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 11:29:37.702: INFO: Pod daemon-set-psk5n is not available May 8 11:29:37.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:29:38.702: INFO: Pod daemon-set-v9fwt is not available May 8 11:29:38.706: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 8 11:29:38.711: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 8 11:29:38.713: INFO: Number of nodes with available pods: 1
May 8 11:29:38.714: INFO: Node hunter-worker2 is running more than one daemon pod
May 8 11:29:39.719: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 8 11:29:39.722: INFO: Number of nodes with available pods: 1
May 8 11:29:39.722: INFO: Node hunter-worker2 is running more than one daemon pod
May 8 11:29:40.718: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 8 11:29:40.721: INFO: Number of nodes with available pods: 1
May 8 11:29:40.721: INFO: Node hunter-worker2 is running more than one daemon pod
May 8 11:29:41.717: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 8 11:29:41.720: INFO: Number of nodes with available pods: 2
May 8 11:29:41.720: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6m6bh, will wait for the garbage collector to delete the pods
May 8 11:29:41.792: INFO: Deleting DaemonSet.extensions daemon-set took: 7.287921ms
May 8 11:29:41.892: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.23503ms
May 8 11:29:51.295: INFO: Number of nodes with available pods: 0
May 8 11:29:51.295: INFO: Number of running nodes: 0, number of available pods: 0
May 8 11:29:51.298: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6m6bh/daemonsets","resourceVersion":"9405877"},"items":null}
May 8 11:29:51.300: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6m6bh/pods","resourceVersion":"9405877"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:29:51.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6m6bh" for this suite.
May 8 11:29:59.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:29:59.336: INFO: namespace: e2e-tests-daemonsets-6m6bh, resource: bindings, ignored listing per whitelist
May 8 11:29:59.408: INFO: namespace e2e-tests-daemonsets-6m6bh deletion completed in 8.093934726s
• [SLOW TEST:40.307 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:29:59.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 8 11:29:59.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-d4k7z'
May 8 11:29:59.628: INFO: stderr: ""
May 8 11:29:59.628: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 8 11:29:59.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d4k7z'
May 8 11:30:04.263: INFO: stderr: ""
May 8 11:30:04.263: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:30:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d4k7z" for this suite.
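The test above drives kubectl through logged invocations of the form `Running '<command>'`. When replaying a failed conformance test by hand, it can help to pull that embedded command back out of the log. A minimal sketch, using a hypothetical helper (`extract_command` is not part of the e2e framework):

```python
import re
import shlex

# Hypothetical helper: extract the quoted command from a "Running '<cmd>'"
# log entry so it can be re-run manually.
RUNNING_RE = re.compile(r"INFO: Running '(?P<cmd>[^']+)'")

def extract_command(log_line):
    """Return the argv list of the command embedded in a log line."""
    m = RUNNING_RE.search(log_line)
    if m is None:
        raise ValueError("no Running '<cmd>' entry in line")
    return shlex.split(m.group("cmd"))

# Sample line copied from the log above.
line = ("May 8 11:29:59.513: INFO: Running '/usr/local/bin/kubectl "
        "--kubeconfig=/root/.kube/config run e2e-test-nginx-pod "
        "--restart=Never --generator=run-pod/v1 "
        "--image=docker.io/library/nginx:1.14-alpine "
        "--namespace=e2e-tests-kubectl-d4k7z'")
argv = extract_command(line)
```

The resulting argv can be passed straight to `subprocess.run` against a live cluster, though nothing here assumes one is available.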
May 8 11:30:10.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:30:10.300: INFO: namespace: e2e-tests-kubectl-d4k7z, resource: bindings, ignored listing per whitelist
May 8 11:30:10.359: INFO: namespace e2e-tests-kubectl-d4k7z deletion completed in 6.092889165s
• [SLOW TEST:10.951 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:30:10.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-46b9b1b0-911f-11ea-8adb-0242ac110017
STEP: Creating secret with name s-test-opt-upd-46b9b218-911f-11ea-8adb-0242ac110017
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-46b9b1b0-911f-11ea-8adb-0242ac110017
STEP: Updating secret s-test-opt-upd-46b9b218-911f-11ea-8adb-0242ac110017
STEP: Creating secret with name s-test-opt-create-46b9b23d-911f-11ea-8adb-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:30:20.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zcmqn" for this suite.
May 8 11:30:44.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:30:44.692: INFO: namespace: e2e-tests-secrets-zcmqn, resource: bindings, ignored listing per whitelist
May 8 11:30:44.735: INFO: namespace e2e-tests-secrets-zcmqn deletion completed in 24.101258746s
• [SLOW TEST:34.375 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:30:44.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 8 11:30:44.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-68pjd'
May 8 11:30:44.987: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 8 11:30:44.987: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 8 11:30:47.064: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j5z5s]
May 8 11:30:47.064: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j5z5s" in namespace "e2e-tests-kubectl-68pjd" to be "running and ready"
May 8 11:30:47.067: INFO: Pod "e2e-test-nginx-rc-j5z5s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.143389ms
May 8 11:30:49.071: INFO: Pod "e2e-test-nginx-rc-j5z5s": Phase="Running", Reason="", readiness=true. Elapsed: 2.007329778s
May 8 11:30:49.071: INFO: Pod "e2e-test-nginx-rc-j5z5s" satisfied condition "running and ready"
May 8 11:30:49.071: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-j5z5s]
May 8 11:30:49.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-68pjd'
May 8 11:30:49.205: INFO: stderr: ""
May 8 11:30:49.205: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
May 8 11:30:49.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-68pjd'
May 8 11:30:49.325: INFO: stderr: ""
May 8 11:30:49.325: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:30:49.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-68pjd" for this suite.
May 8 11:31:11.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:31:11.427: INFO: namespace: e2e-tests-kubectl-68pjd, resource: bindings, ignored listing per whitelist
May 8 11:31:11.442: INFO: namespace e2e-tests-kubectl-68pjd deletion completed in 22.110011453s
• [SLOW TEST:26.707 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:31:11.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6b1ed2ca-911f-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume secrets
May 8 11:31:11.687: INFO: Waiting up to 5m0s for pod "pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-xz96f" to be "success or failure"
May 8 11:31:11.699: INFO: Pod "pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.451265ms
May 8 11:31:13.704: INFO: Pod "pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01639394s
May 8 11:31:15.745: INFO: Pod "pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058219958s
STEP: Saw pod success
May 8 11:31:15.745: INFO: Pod "pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 11:31:15.799: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 8 11:31:15.889: INFO: Waiting for pod pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017 to disappear
May 8 11:31:16.154: INFO: Pod pod-secrets-6b33c55f-911f-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:31:16.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xz96f" for this suite.
May 8 11:31:22.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:31:22.304: INFO: namespace: e2e-tests-secrets-xz96f, resource: bindings, ignored listing per whitelist
May 8 11:31:22.304: INFO: namespace e2e-tests-secrets-xz96f deletion completed in 6.145555829s
STEP: Destroying namespace "e2e-tests-secret-namespace-zszx8" for this suite.
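Each "success or failure" wait above prints one `Phase=.../Elapsed: ...` entry per poll tick. When triaging a run, it can be handy to scrape those entries and confirm the pod really progressed and that the elapsed values are monotonic. A small sketch with a hypothetical `parse_poll` helper (not part of the e2e framework):

```python
import re

# Match the phase and elapsed duration in a poll entry; the unit is
# either "ms" or "s" in these logs.
POLL_RE = re.compile(r'Phase="(?P<phase>\w+)".*Elapsed: (?P<value>[\d.]+)(?P<unit>ms|s)')

def parse_poll(line):
    """Return (phase, elapsed_seconds) scraped from one poll log entry."""
    m = POLL_RE.search(line)
    phase = m.group("phase")
    seconds = float(m.group("value"))
    if m.group("unit") == "ms":
        seconds /= 1000.0
    return phase, seconds

# Poll entries copied (abbreviated pod name) from the log above.
lines = [
    'Pod "pod-secrets": Phase="Pending", Reason="", readiness=false. Elapsed: 11.451265ms',
    'Pod "pod-secrets": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01639394s',
    'Pod "pod-secrets": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058219958s',
]
polls = [parse_poll(l) for l in lines]
```

A run that never reaches `Succeeded` before the 5m0s budget shows up immediately as a final phase of `Pending` or `Running` in the scraped list.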
May 8 11:31:28.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:31:28.386: INFO: namespace: e2e-tests-secret-namespace-zszx8, resource: bindings, ignored listing per whitelist
May 8 11:31:28.396: INFO: namespace e2e-tests-secret-namespace-zszx8 deletion completed in 6.091892565s
• [SLOW TEST:16.954 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:31:28.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 8 11:31:28.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:31:32.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cmbpx" for this suite.
May 8 11:32:10.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:32:10.798: INFO: namespace: e2e-tests-pods-cmbpx, resource: bindings, ignored listing per whitelist
May 8 11:32:10.822: INFO: namespace e2e-tests-pods-cmbpx deletion completed in 38.120605155s
• [SLOW TEST:42.425 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:32:10.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-rkgl
STEP: Creating a pod to test atomic-volume-subpath
May 8 11:32:10.947: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rkgl" in namespace "e2e-tests-subpath-2nx64" to be "success or failure"
May 8 11:32:10.950: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.418104ms
May 8 11:32:12.955: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007985476s
May 8 11:32:14.958: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011148934s
May 8 11:32:16.962: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015213676s
May 8 11:32:18.967: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 8.020134141s
May 8 11:32:20.971: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 10.024595509s
May 8 11:32:22.976: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 12.029327474s
May 8 11:32:24.980: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 14.032790705s
May 8 11:32:26.983: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 16.036178411s
May 8 11:32:28.987: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 18.040610041s
May 8 11:32:30.992: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 20.044960823s
May 8 11:32:32.996: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 22.048967246s
May 8 11:32:35.000: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Running", Reason="", readiness=false. Elapsed: 24.053006155s
May 8 11:32:37.003: INFO: Pod "pod-subpath-test-downwardapi-rkgl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.056610984s
STEP: Saw pod success
May 8 11:32:37.003: INFO: Pod "pod-subpath-test-downwardapi-rkgl" satisfied condition "success or failure"
May 8 11:32:37.006: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-rkgl container test-container-subpath-downwardapi-rkgl:
STEP: delete the pod
May 8 11:32:37.181: INFO: Waiting for pod pod-subpath-test-downwardapi-rkgl to disappear
May 8 11:32:37.208: INFO: Pod pod-subpath-test-downwardapi-rkgl no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rkgl
May 8 11:32:37.208: INFO: Deleting pod "pod-subpath-test-downwardapi-rkgl" in namespace "e2e-tests-subpath-2nx64"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:32:37.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2nx64" for this suite.
May 8 11:32:43.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:32:43.366: INFO: namespace: e2e-tests-subpath-2nx64, resource: bindings, ignored listing per whitelist
May 8 11:32:43.370: INFO: namespace e2e-tests-subpath-2nx64 deletion completed in 6.146797819s
• [SLOW TEST:32.548 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
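The waits, timeouts, and completion times throughout these logs use Go's duration syntax ("5m0s", "100.23503ms", "8.093934726s"). When post-processing a run, those strings need to be converted to seconds before any arithmetic. A minimal parser sketch, a hypothetical helper rather than the framework's own code, covering only the units that appear in this log:

```python
import re

# One component of a Go-style duration: a number followed by a unit.
# "ms" must be tried before "s" so "100.23503ms" is not split wrongly.
PART = re.compile(r'(\d+(?:\.\d+)?)(ms|s|m|h)')

UNIT_SECONDS = {"ms": 0.001, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Convert a Go duration string such as "5m0s" to seconds (float)."""
    parts = PART.findall(text)
    if not parts:
        raise ValueError("not a duration: %r" % text)
    return sum(float(value) * UNIT_SECONDS[unit] for value, unit in parts)
```

With this, a "5m0s" pod-wait budget compares directly against a "26.056610984s" elapsed value.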
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:32:43.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5kh29
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 8 11:32:43.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 8 11:33:07.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.168:8080/dial?request=hostName&protocol=udp&host=10.244.1.167&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5kh29 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 11:33:07.869: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:33:07.904978 6 log.go:172] (0xc0020182c0) (0xc001ac35e0) Create stream
I0508 11:33:07.905017 6 log.go:172] (0xc0020182c0) (0xc001ac35e0) Stream added, broadcasting: 1
I0508 11:33:07.908198 6 log.go:172] (0xc0020182c0) Reply frame received for 1
I0508 11:33:07.908265 6 log.go:172] (0xc0020182c0) (0xc00211ec80) Create stream
I0508 11:33:07.908290 6 log.go:172] (0xc0020182c0) (0xc00211ec80) Stream added, broadcasting: 3
I0508 11:33:07.909524 6 log.go:172] (0xc0020182c0) Reply frame received for 3
I0508 11:33:07.909575 6 log.go:172] (0xc0020182c0) (0xc001ac3720) Create stream
I0508 11:33:07.909591 6 log.go:172] (0xc0020182c0) (0xc001ac3720) Stream added, broadcasting: 5
I0508 11:33:07.910609 6 log.go:172] (0xc0020182c0) Reply frame received for 5
I0508 11:33:07.982684 6 log.go:172] (0xc0020182c0) Data frame received for 3
I0508 11:33:07.982725 6 log.go:172] (0xc00211ec80) (3) Data frame handling
I0508 11:33:07.982763 6 log.go:172] (0xc00211ec80) (3) Data frame sent
I0508 11:33:07.983287 6 log.go:172] (0xc0020182c0) Data frame received for 3
I0508 11:33:07.983312 6 log.go:172] (0xc00211ec80) (3) Data frame handling
I0508 11:33:07.983500 6 log.go:172] (0xc0020182c0) Data frame received for 5
I0508 11:33:07.983521 6 log.go:172] (0xc001ac3720) (5) Data frame handling
I0508 11:33:07.985320 6 log.go:172] (0xc0020182c0) Data frame received for 1
I0508 11:33:07.985396 6 log.go:172] (0xc001ac35e0) (1) Data frame handling
I0508 11:33:07.985423 6 log.go:172] (0xc001ac35e0) (1) Data frame sent
I0508 11:33:07.985439 6 log.go:172] (0xc0020182c0) (0xc001ac35e0) Stream removed, broadcasting: 1
I0508 11:33:07.985549 6 log.go:172] (0xc0020182c0) (0xc001ac35e0) Stream removed, broadcasting: 1
I0508 11:33:07.985575 6 log.go:172] (0xc0020182c0) (0xc00211ec80) Stream removed, broadcasting: 3
I0508 11:33:07.985690 6 log.go:172] (0xc0020182c0) Go away received
I0508 11:33:07.985877 6 log.go:172] (0xc0020182c0) (0xc001ac3720) Stream removed, broadcasting: 5
May 8 11:33:07.985: INFO: Waiting for endpoints: map[]
May 8 11:33:07.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.168:8080/dial?request=hostName&protocol=udp&host=10.244.2.66&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5kh29 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 11:33:07.989: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:33:08.020608 6 log.go:172] (0xc002018790) (0xc001ac3900) Create stream
I0508 11:33:08.020638 6 log.go:172] (0xc002018790) (0xc001ac3900) Stream added, broadcasting: 1
I0508 11:33:08.023502 6 log.go:172] (0xc002018790) Reply frame received for 1
I0508 11:33:08.023549 6 log.go:172] (0xc002018790) (0xc00235c1e0) Create stream
I0508 11:33:08.023576 6 log.go:172] (0xc002018790) (0xc00235c1e0) Stream added, broadcasting: 3
I0508 11:33:08.024767 6 log.go:172] (0xc002018790) Reply frame received for 3
I0508 11:33:08.024805 6 log.go:172] (0xc002018790) (0xc001e99a40) Create stream
I0508 11:33:08.024819 6 log.go:172] (0xc002018790) (0xc001e99a40) Stream added, broadcasting: 5
I0508 11:33:08.026017 6 log.go:172] (0xc002018790) Reply frame received for 5
I0508 11:33:08.086340 6 log.go:172] (0xc002018790) Data frame received for 3
I0508 11:33:08.086387 6 log.go:172] (0xc00235c1e0) (3) Data frame handling
I0508 11:33:08.086409 6 log.go:172] (0xc00235c1e0) (3) Data frame sent
I0508 11:33:08.086870 6 log.go:172] (0xc002018790) Data frame received for 3
I0508 11:33:08.086898 6 log.go:172] (0xc00235c1e0) (3) Data frame handling
I0508 11:33:08.087102 6 log.go:172] (0xc002018790) Data frame received for 5
I0508 11:33:08.087136 6 log.go:172] (0xc001e99a40) (5) Data frame handling
I0508 11:33:08.088978 6 log.go:172] (0xc002018790) Data frame received for 1
I0508 11:33:08.089006 6 log.go:172] (0xc001ac3900) (1) Data frame handling
I0508 11:33:08.089033 6 log.go:172] (0xc001ac3900) (1) Data frame sent
I0508 11:33:08.089056 6 log.go:172] (0xc002018790) (0xc001ac3900) Stream removed, broadcasting: 1
I0508 11:33:08.089070 6 log.go:172] (0xc002018790) Go away received
I0508 11:33:08.089283 6 log.go:172] (0xc002018790) (0xc001ac3900) Stream removed, broadcasting: 1
I0508 11:33:08.089326 6 log.go:172] (0xc002018790) (0xc00235c1e0) Stream removed, broadcasting: 3
I0508 11:33:08.089361 6 log.go:172] (0xc002018790) (0xc001e99a40) Stream removed, broadcasting: 5
May 8 11:33:08.089: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:33:08.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5kh29" for this suite.
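The intra-pod UDP check above works by curling a `/dial` endpoint on one test pod, which then dials a target pod and reports the hostnames it reached. Reconstructing that probe URL makes the mechanism easier to see; the endpoint, port, and parameter names below are taken verbatim from the logged curl command, while `dial_url` itself is a hypothetical helper:

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL the networking test curls.

    proxy_ip is the pod that performs the dial on our behalf;
    target_ip is the pod whose hostname we expect back.
    """
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": protocol,    # "udp" in this test variant
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return "http://%s:8080/dial?%s" % (proxy_ip, query)

# First probe from the log: 10.244.1.168 dials 10.244.1.167 over UDP.
url = dial_url("10.244.1.168", "10.244.1.167")
```

The test repeats this for every target pod (here 10.244.1.167 and 10.244.2.66) and waits until the set of unreached endpoints, logged as `Waiting for endpoints: map[]`, is empty.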
May 8 11:33:32.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:33:32.184: INFO: namespace: e2e-tests-pod-network-test-5kh29, resource: bindings, ignored listing per whitelist
May 8 11:33:32.184: INFO: namespace e2e-tests-pod-network-test-5kh29 deletion completed in 24.090580875s
• [SLOW TEST:48.814 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:33:32.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-bf07977e-911f-11ea-8adb-0242ac110017
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bf07977e-911f-11ea-8adb-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:35:03.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kw8bb" for this suite.
May 8 11:35:27.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:35:27.325: INFO: namespace: e2e-tests-configmap-kw8bb, resource: bindings, ignored listing per whitelist
May 8 11:35:27.362: INFO: namespace e2e-tests-configmap-kw8bb deletion completed in 24.096234457s
• [SLOW TEST:115.178 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:35:27.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-03a98ed7-9120-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May 8 11:35:27.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-glm2h" to be "success or failure"
May 8 11:35:27.493: INFO: Pod "pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.994114ms
May 8 11:35:29.499: INFO: Pod "pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0236852s
May 8 11:35:31.504: INFO: Pod "pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028267318s
STEP: Saw pod success
May 8 11:35:31.504: INFO: Pod "pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 11:35:31.507: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017 container configmap-volume-test:
STEP: delete the pod
May 8 11:35:31.556: INFO: Waiting for pod pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017 to disappear
May 8 11:35:31.571: INFO: Pod pod-configmaps-03aa3567-9120-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:35:31.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-glm2h" for this suite.
May 8 11:35:37.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:35:37.779: INFO: namespace: e2e-tests-configmap-glm2h, resource: bindings, ignored listing per whitelist
May 8 11:35:37.801: INFO: namespace e2e-tests-configmap-glm2h deletion completed in 6.226754883s
• [SLOW TEST:10.439 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:35:37.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-09eeef10-9120-11ea-8adb-0242ac110017
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 11:35:44.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tgjxl" for this suite.
May 8 11:36:06.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:36:06.241: INFO: namespace: e2e-tests-configmap-tgjxl, resource: bindings, ignored listing per whitelist May 8 11:36:06.276: INFO: namespace e2e-tests-configmap-tgjxl deletion completed in 22.09660529s • [SLOW TEST:28.475 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:36:06.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-b2w6 STEP: Creating a pod to test atomic-volume-subpath May 8 11:36:06.418: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b2w6" in namespace "e2e-tests-subpath-c24h8" to be "success or failure" May 8 11:36:06.434: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.321155ms May 8 11:36:08.438: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01995349s May 8 11:36:10.442: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023834833s May 8 11:36:12.447: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028724081s May 8 11:36:14.452: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 8.033522982s May 8 11:36:16.456: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 10.038069432s May 8 11:36:18.461: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 12.042223778s May 8 11:36:20.464: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 14.045346208s May 8 11:36:22.470: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 16.051544771s May 8 11:36:24.475: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 18.056264284s May 8 11:36:26.478: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 20.060017225s May 8 11:36:28.483: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 22.06446631s May 8 11:36:30.487: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Running", Reason="", readiness=false. Elapsed: 24.068887252s May 8 11:36:32.491: INFO: Pod "pod-subpath-test-configmap-b2w6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.073126474s STEP: Saw pod success May 8 11:36:32.492: INFO: Pod "pod-subpath-test-configmap-b2w6" satisfied condition "success or failure" May 8 11:36:32.494: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-b2w6 container test-container-subpath-configmap-b2w6: STEP: delete the pod May 8 11:36:32.537: INFO: Waiting for pod pod-subpath-test-configmap-b2w6 to disappear May 8 11:36:32.565: INFO: Pod pod-subpath-test-configmap-b2w6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-b2w6 May 8 11:36:32.565: INFO: Deleting pod "pod-subpath-test-configmap-b2w6" in namespace "e2e-tests-subpath-c24h8" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:36:32.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c24h8" for this suite. May 8 11:36:40.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:36:40.663: INFO: namespace: e2e-tests-subpath-c24h8, resource: bindings, ignored listing per whitelist May 8 11:36:40.680: INFO: namespace e2e-tests-subpath-c24h8 deletion completed in 8.105288728s • [SLOW TEST:34.404 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:36:40.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:36:40.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-hqpx2" to be "success or failure" May 8 11:36:40.818: INFO: Pod "downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489659ms May 8 11:36:42.823: INFO: Pod "downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008874803s May 8 11:36:44.827: INFO: Pod "downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013152686s STEP: Saw pod success May 8 11:36:44.827: INFO: Pod "downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:36:44.831: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:36:44.873: INFO: Waiting for pod downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017 to disappear May 8 11:36:44.895: INFO: Pod downwardapi-volume-2f6019f8-9120-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:36:44.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hqpx2" for this suite. May 8 11:36:50.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:36:50.975: INFO: namespace: e2e-tests-downward-api-hqpx2, resource: bindings, ignored listing per whitelist May 8 11:36:51.013: INFO: namespace e2e-tests-downward-api-hqpx2 deletion completed in 6.114427675s • [SLOW TEST:10.332 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 
11:36:51.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 8 11:36:55.635: INFO: Successfully updated pod "labelsupdate3581ff35-9120-11ea-8adb-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:36:57.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cznwp" for this suite. May 8 11:37:19.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:37:19.775: INFO: namespace: e2e-tests-projected-cznwp, resource: bindings, ignored listing per whitelist May 8 11:37:19.837: INFO: namespace e2e-tests-projected-cznwp deletion completed in 22.114730377s • [SLOW TEST:28.824 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:37:19.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 8 11:37:19.957: INFO: Waiting up to 5m0s for pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-b2s25" to be "success or failure" May 8 11:37:19.962: INFO: Pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338932ms May 8 11:37:21.966: INFO: Pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008537919s May 8 11:37:23.970: INFO: Pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.013117091s May 8 11:37:25.975: INFO: Pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01745951s STEP: Saw pod success May 8 11:37:25.975: INFO: Pod "pod-46b5b41e-9120-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:37:25.978: INFO: Trying to get logs from node hunter-worker pod pod-46b5b41e-9120-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:37:26.015: INFO: Waiting for pod pod-46b5b41e-9120-11ea-8adb-0242ac110017 to disappear May 8 11:37:26.045: INFO: Pod pod-46b5b41e-9120-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:37:26.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-b2s25" for this suite. May 8 11:37:32.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:37:32.132: INFO: namespace: e2e-tests-emptydir-b2s25, resource: bindings, ignored listing per whitelist May 8 11:37:32.134: INFO: namespace e2e-tests-emptydir-b2s25 deletion completed in 6.085260091s • [SLOW TEST:12.297 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:37:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 8 11:37:32.249: INFO: Pod name pod-release: Found 0 pods out of 1 May 8 11:37:37.254: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:37:38.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-ld7xb" for this suite. May 8 11:37:44.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:37:44.362: INFO: namespace: e2e-tests-replication-controller-ld7xb, resource: bindings, ignored listing per whitelist May 8 11:37:44.379: INFO: namespace e2e-tests-replication-controller-ld7xb deletion completed in 6.10382516s • [SLOW TEST:12.245 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:37:44.379: INFO: >>> kubeConfig: /root/.kube/config 
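The "should release no longer matching pods" test works by rewriting a pod's matched label so the ReplicationController orphans it. A sketch of the equivalent manual steps, using the test's `name=pod-release` selector and a hypothetical pod name (commands printed, not executed):

```shell
#!/bin/sh
# Sketch: releasing a pod from an RC by changing its matching label.
# The selector name=pod-release is from the test; the pod suffix is hypothetical.
RELABEL_CMD="kubectl label pod pod-release-abc12 name=released --overwrite"
echo "$RELABEL_CMD"

# The RC stops matching the pod, clears its controller ownerReference,
# and spins up a replacement to restore the replica count:
echo "kubectl get pods -l name=pod-release"
```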
STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 8 11:37:44.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:47.331: INFO: stderr: "" May 8 11:37:47.331: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 11:37:47.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:47.447: INFO: stderr: "" May 8 11:37:47.447: INFO: stdout: "update-demo-nautilus-2xr26 update-demo-nautilus-52v4h " May 8 11:37:47.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xr26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:47.535: INFO: stderr: "" May 8 11:37:47.535: INFO: stdout: "" May 8 11:37:47.535: INFO: update-demo-nautilus-2xr26 is created but not running May 8 11:37:52.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:52.649: INFO: stderr: "" May 8 11:37:52.649: INFO: stdout: "update-demo-nautilus-2xr26 update-demo-nautilus-52v4h " May 8 11:37:52.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xr26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:52.742: INFO: stderr: "" May 8 11:37:52.742: INFO: stdout: "true" May 8 11:37:52.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xr26 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:52.848: INFO: stderr: "" May 8 11:37:52.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:37:52.848: INFO: validating pod update-demo-nautilus-2xr26 May 8 11:37:52.853: INFO: got data: { "image": "nautilus.jpg" } May 8 11:37:52.853: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 11:37:52.853: INFO: update-demo-nautilus-2xr26 is verified up and running May 8 11:37:52.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:52.962: INFO: stderr: "" May 8 11:37:52.962: INFO: stdout: "true" May 8 11:37:52.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:53.075: INFO: stderr: "" May 8 11:37:53.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:37:53.075: INFO: validating pod update-demo-nautilus-52v4h May 8 11:37:53.079: INFO: got data: { "image": "nautilus.jpg" } May 8 11:37:53.079: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 11:37:53.079: INFO: update-demo-nautilus-52v4h is verified up and running STEP: scaling down the replication controller May 8 11:37:53.082: INFO: scanned /root for discovery docs: May 8 11:37:53.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:54.240: INFO: stderr: "" May 8 11:37:54.240: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 8 11:37:54.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:54.341: INFO: stderr: "" May 8 11:37:54.342: INFO: stdout: "update-demo-nautilus-2xr26 update-demo-nautilus-52v4h " STEP: Replicas for name=update-demo: expected=1 actual=2 May 8 11:37:59.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:59.443: INFO: stderr: "" May 8 11:37:59.443: INFO: stdout: "update-demo-nautilus-52v4h " May 8 11:37:59.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:59.535: INFO: stderr: "" May 8 11:37:59.536: INFO: stdout: "true" May 8 11:37:59.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:37:59.640: INFO: stderr: "" May 8 11:37:59.640: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:37:59.640: INFO: validating pod update-demo-nautilus-52v4h May 8 11:37:59.643: INFO: got data: { "image": "nautilus.jpg" } May 8 11:37:59.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
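The scale-down just observed (and the scale-up that follows) reduce to two `kubectl scale` invocations plus a replica-count check. A sketch with the names from this log, printed rather than executed:

```shell
#!/bin/sh
# Sketch: the scale operations this Update Demo test exercises.
NS=e2e-tests-kubectl-fds6s
echo "kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=$NS"
echo "kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=$NS"

# Confirm the observed replica count converged to the requested one:
echo "kubectl get rc update-demo-nautilus -o jsonpath={.status.replicas} --namespace=$NS"
```

The "expected=1 actual=2" line in the log is the test tolerating the interval between the scale request and the extra pod actually terminating.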
May 8 11:37:59.643: INFO: update-demo-nautilus-52v4h is verified up and running STEP: scaling up the replication controller May 8 11:37:59.645: INFO: scanned /root for discovery docs: May 8 11:37:59.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:00.795: INFO: stderr: "" May 8 11:38:00.795: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 11:38:00.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:00.888: INFO: stderr: "" May 8 11:38:00.888: INFO: stdout: "update-demo-nautilus-52v4h update-demo-nautilus-pzvgr " May 8 11:38:00.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:00.990: INFO: stderr: "" May 8 11:38:00.990: INFO: stdout: "true" May 8 11:38:00.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:01.087: INFO: stderr: "" May 8 11:38:01.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:38:01.087: INFO: validating pod update-demo-nautilus-52v4h May 8 11:38:01.090: INFO: got data: { "image": "nautilus.jpg" } May 8 11:38:01.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 11:38:01.090: INFO: update-demo-nautilus-52v4h is verified up and running May 8 11:38:01.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzvgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:01.213: INFO: stderr: "" May 8 11:38:01.213: INFO: stdout: "" May 8 11:38:01.213: INFO: update-demo-nautilus-pzvgr is created but not running May 8 11:38:06.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.324: INFO: stderr: "" May 8 11:38:06.324: INFO: stdout: "update-demo-nautilus-52v4h update-demo-nautilus-pzvgr " May 8 11:38:06.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.423: INFO: stderr: "" May 8 11:38:06.423: INFO: stdout: "true" May 8 11:38:06.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52v4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.541: INFO: stderr: "" May 8 11:38:06.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:38:06.541: INFO: validating pod update-demo-nautilus-52v4h May 8 11:38:06.545: INFO: got data: { "image": "nautilus.jpg" } May 8 11:38:06.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 11:38:06.545: INFO: update-demo-nautilus-52v4h is verified up and running May 8 11:38:06.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzvgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.646: INFO: stderr: "" May 8 11:38:06.646: INFO: stdout: "true" May 8 11:38:06.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzvgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.748: INFO: stderr: "" May 8 11:38:06.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 11:38:06.748: INFO: validating pod update-demo-nautilus-pzvgr May 8 11:38:06.752: INFO: got data: { "image": "nautilus.jpg" } May 8 11:38:06.752: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 8 11:38:06.752: INFO: update-demo-nautilus-pzvgr is verified up and running STEP: using delete to clean up resources May 8 11:38:06.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:06.962: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:38:06.962: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 11:38:06.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fds6s' May 8 11:38:07.483: INFO: stderr: "No resources found.\n" May 8 11:38:07.483: INFO: stdout: "" May 8 11:38:07.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fds6s -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 11:38:07.632: INFO: stderr: "" May 8 11:38:07.632: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:38:07.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fds6s" for this suite. 
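The cleanup step above uses forced deletion. A sketch of the same sequence (printed, not executed), naming the RC directly instead of piping a manifest via `-f -`:

```shell
#!/bin/sh
# Sketch: the forced cleanup from the log. --grace-period=0 --force removes
# the API object immediately; as the warning in the log notes, kubectl does
# not wait for confirmation that the containers have actually stopped.
NS=e2e-tests-kubectl-fds6s
DELETE_CMD="kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=$NS"
echo "$DELETE_CMD"

# Verify nothing matching the selector remains:
echo "kubectl get rc,svc -l name=update-demo --no-headers --namespace=$NS"
```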
May 8 11:38:29.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:38:29.682: INFO: namespace: e2e-tests-kubectl-fds6s, resource: bindings, ignored listing per whitelist May 8 11:38:29.800: INFO: namespace e2e-tests-kubectl-fds6s deletion completed in 22.164835522s • [SLOW TEST:45.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:38:29.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 8 11:38:30.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4jcrs' May 8 11:38:30.453: INFO: stderr: "" May 8 11:38:30.453: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 8 11:38:31.688: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:31.688: INFO: Found 0 / 1 May 8 11:38:32.458: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:32.458: INFO: Found 0 / 1 May 8 11:38:33.458: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:33.458: INFO: Found 0 / 1 May 8 11:38:34.652: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:34.652: INFO: Found 0 / 1 May 8 11:38:35.504: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:35.505: INFO: Found 0 / 1 May 8 11:38:36.458: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:36.458: INFO: Found 0 / 1 May 8 11:38:38.283: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:38.283: INFO: Found 0 / 1 May 8 11:38:38.592: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:38.592: INFO: Found 0 / 1 May 8 11:38:39.540: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:39.540: INFO: Found 0 / 1 May 8 11:38:40.604: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:40.604: INFO: Found 0 / 1 May 8 11:38:41.610: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:41.610: INFO: Found 1 / 1 May 8 11:38:41.610: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 11:38:41.614: INFO: Selector matched 1 pods for map[app:redis] May 8 11:38:41.614: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 8 11:38:41.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs' May 8 11:38:42.170: INFO: stderr: "" May 8 11:38:42.171: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 11:38:39.529 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 11:38:39.529 # Server started, Redis version 3.2.12\n1:M 08 May 11:38:39.529 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 May 11:38:39.529 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 8 11:38:42.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs --tail=1' May 8 11:38:42.270: INFO: stderr: "" May 8 11:38:42.270: INFO: stdout: "1:M 08 May 11:38:39.529 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 8 11:38:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs --limit-bytes=1' May 8 11:38:42.370: INFO: stderr: "" May 8 11:38:42.370: INFO: stdout: " " STEP: exposing timestamps May 8 11:38:42.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs --tail=1 --timestamps' May 8 11:38:42.484: INFO: stderr: "" 
May 8 11:38:42.485: INFO: stdout: "2020-05-08T11:38:39.529735215Z 1:M 08 May 11:38:39.529 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 8 11:38:44.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs --since=1s' May 8 11:38:45.098: INFO: stderr: "" May 8 11:38:45.098: INFO: stdout: "" May 8 11:38:45.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gqv9r redis-master --namespace=e2e-tests-kubectl-4jcrs --since=24h' May 8 11:38:45.206: INFO: stderr: "" May 8 11:38:45.207: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 11:38:39.529 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 11:38:39.529 # Server started, Redis version 3.2.12\n1:M 08 May 11:38:39.529 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 08 May 11:38:39.529 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 8 11:38:45.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4jcrs' May 8 11:38:45.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:38:45.318: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 8 11:38:45.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-4jcrs' May 8 11:38:45.433: INFO: stderr: "No resources found.\n" May 8 11:38:45.433: INFO: stdout: "" May 8 11:38:45.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-4jcrs -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 11:38:45.678: INFO: stderr: "" May 8 11:38:45.678: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:38:45.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4jcrs" for this suite. 
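The log-filtering steps exercised above can be replayed by hand against any running pod; the pod and container names below are the generated ones from this run, so substitute your own. Note the suite still uses the deprecated `kubectl log` alias; current kubectl only accepts `kubectl logs`.

```shell
NS=e2e-tests-kubectl-4jcrs   # namespace from this run; use your own
kubectl logs redis-master-gqv9r redis-master -n "$NS"                        # full container log
kubectl logs redis-master-gqv9r redis-master -n "$NS" --tail=1               # last line only
kubectl logs redis-master-gqv9r redis-master -n "$NS" --limit-bytes=1        # first byte only
kubectl logs redis-master-gqv9r redis-master -n "$NS" --tail=1 --timestamps  # RFC3339 prefix per line
kubectl logs redis-master-gqv9r redis-master -n "$NS" --since=1s             # empty if the pod was quiet
kubectl logs redis-master-gqv9r redis-master -n "$NS" --since=24h
```

Each flag maps one-to-one onto a STEP in the test: limiting lines, limiting bytes, exposing timestamps, and restricting to a time range.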
May 8 11:39:09.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:39:09.838: INFO: namespace: e2e-tests-kubectl-4jcrs, resource: bindings, ignored listing per whitelist May 8 11:39:09.842: INFO: namespace e2e-tests-kubectl-4jcrs deletion completed in 24.087986198s • [SLOW TEST:40.042 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:39:09.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-88489fc4-9120-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:39:09.989: INFO: Waiting up to 5m0s for pod "pod-secrets-8849353c-9120-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-5bwcp" to be "success or failure" May 8 11:39:10.012: INFO: Pod "pod-secrets-8849353c-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.713876ms May 8 11:39:12.016: INFO: Pod "pod-secrets-8849353c-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026766943s May 8 11:39:14.020: INFO: Pod "pod-secrets-8849353c-9120-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030760023s STEP: Saw pod success May 8 11:39:14.020: INFO: Pod "pod-secrets-8849353c-9120-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:39:14.025: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8849353c-9120-11ea-8adb-0242ac110017 container secret-volume-test: STEP: delete the pod May 8 11:39:14.169: INFO: Waiting for pod pod-secrets-8849353c-9120-11ea-8adb-0242ac110017 to disappear May 8 11:39:14.175: INFO: Pod pod-secrets-8849353c-9120-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:39:14.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5bwcp" for this suite. 
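The Secret-volume test above creates a secret, mounts it with `defaultMode` set, and checks the mounted file's mode and content from inside the pod. A hand-rolled equivalent might look like this; it is a sketch assuming a reachable cluster, and the secret and pod names are invented rather than the generated ones from the run:

```shell
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # owner read-only; this mode is what the test asserts on
EOF
```

The "success or failure" condition in the log is simply the pod reaching `Succeeded` after the container's command exits 0.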
May 8 11:39:20.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:39:20.235: INFO: namespace: e2e-tests-secrets-5bwcp, resource: bindings, ignored listing per whitelist May 8 11:39:20.267: INFO: namespace e2e-tests-secrets-5bwcp deletion completed in 6.089385379s • [SLOW TEST:10.424 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:39:20.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:39:20.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-lj7nl" to be "success or failure" May 8 11:39:20.392: INFO: Pod "downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 12.942498ms May 8 11:39:22.395: INFO: Pod "downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016563999s May 8 11:39:24.407: INFO: Pod "downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027984367s STEP: Saw pod success May 8 11:39:24.407: INFO: Pod "downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:39:24.410: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:39:25.134: INFO: Waiting for pod downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017 to disappear May 8 11:39:25.188: INFO: Pod downwardapi-volume-8e789215-9120-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:39:25.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lj7nl" for this suite. 
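The projected downward API test above exposes the container's CPU limit as a file in a projected volume. A minimal sketch of the same idea, with all names invented; note that `resourceFieldRef` in a volume requires `containerName`, and with the default divisor of 1 the CPU value is rounded up to whole cores:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
```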
May 8 11:39:31.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:39:31.300: INFO: namespace: e2e-tests-projected-lj7nl, resource: bindings, ignored listing per whitelist May 8 11:39:31.311: INFO: namespace e2e-tests-projected-lj7nl deletion completed in 6.118793469s • [SLOW TEST:11.044 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:39:31.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0508 11:40:01.950849 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 8 11:40:01.950: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:40:01.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wppn7" for this suite. 
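The orphaning behaviour the garbage-collector test drives through `deleteOptions.PropagationPolicy: Orphan` can be approximated from the command line. With the v1.13-era kubectl matching this suite the flag is `--cascade=false` (recent kubectl spells it `--cascade=orphan`); the deployment name below is invented:

```shell
kubectl create deployment demo --image=k8s.gcr.io/pause:3.1
kubectl delete deployment demo --cascade=false   # orphan: the ReplicaSet and Pods survive
kubectl get rs -l app=demo                       # still present; ownerReferences cleared by the GC
```

This mirrors the test's 30-second wait: after an orphaning delete, the garbage collector must remove the owner references from the dependents rather than delete them.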
May 8 11:40:09.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 11:40:10.016: INFO: namespace: e2e-tests-gc-wppn7, resource: bindings, ignored listing per whitelist
May 8 11:40:10.064: INFO: namespace e2e-tests-gc-wppn7 deletion completed in 8.110515436s
• [SLOW TEST:38.752 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 11:40:10.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 8 11:40:10.364: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 8 11:40:10.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:10.709: INFO: stderr: ""
May 8 11:40:10.709: INFO: stdout: "service/redis-slave created\n"
May 8 11:40:10.709: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 8 11:40:10.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:11.006: INFO: stderr: ""
May 8 11:40:11.006: INFO: stdout: "service/redis-master created\n"
May 8 11:40:11.006: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 8 11:40:11.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:11.294: INFO: stderr: ""
May 8 11:40:11.294: INFO: stdout: "service/frontend created\n"
May 8 11:40:11.294: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 8 11:40:11.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:11.534: INFO: stderr: ""
May 8 11:40:11.534: INFO: stdout: "deployment.extensions/frontend created\n"
May 8 11:40:11.534: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 8 11:40:11.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:11.965: INFO: stderr: ""
May 8 11:40:11.965: INFO: stdout: "deployment.extensions/redis-master created\n"
May 8 11:40:11.966: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 8 11:40:11.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dbzsm'
May 8 11:40:12.265: INFO: stderr: ""
May 8 11:40:12.265: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
May 8 11:40:12.265: INFO: Waiting for all frontend pods to be Running.
May 8 11:40:22.316: INFO: Waiting for frontend to serve content.
May 8 11:40:22.336: INFO: Trying to add a new entry to the guestbook.
May 8 11:40:22.351: INFO: Verifying that added entry can be retrieved.
May 8 11:40:22.364: INFO: Failed to get response from guestbook.
err: , response: {"data": ""} STEP: using delete to clean up resources May 8 11:40:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:27.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:27.588: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 8 11:40:27.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:27.760: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:27.760: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 8 11:40:27.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:27.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:27.945: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 11:40:27.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:28.029: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:28.029: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 11:40:28.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:28.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:28.162: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 8 11:40:28.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbzsm' May 8 11:40:28.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 11:40:28.624: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:40:28.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dbzsm" for this suite. 
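The cleanup steps above all use forced, zero-grace-period deletion, which is why each command emits the "Immediate deletion does not wait for confirmation" warning. A generic sketch of the same cleanup, with an invented namespace:

```shell
NS=my-namespace   # invented; the run above used a generated e2e-tests-kubectl-* namespace
kubectl delete service frontend redis-master redis-slave \
  --grace-period=0 --force -n "$NS"
kubectl delete deployment frontend redis-master redis-slave \
  --grace-period=0 --force -n "$NS"
```

As the warning says, the API object is removed immediately, but the underlying containers may keep running on the node for a while.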
May 8 11:41:14.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:41:14.876: INFO: namespace: e2e-tests-kubectl-dbzsm, resource: bindings, ignored listing per whitelist May 8 11:41:14.920: INFO: namespace e2e-tests-kubectl-dbzsm deletion completed in 46.104350049s • [SLOW TEST:64.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:41:14.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 8 11:41:15.287: INFO: PodSpec: initContainers in spec.initContainers May 8 11:42:06.427: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d2fb1781-9120-11ea-8adb-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-tjbgz", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-tjbgz/pods/pod-init-d2fb1781-9120-11ea-8adb-0242ac110017", UID:"d316c15c-9120-11ea-99e8-0242ac110002", ResourceVersion:"9408266", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724534875, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"287341785"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r2wcl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00248e280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2wcl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2wcl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2wcl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00265e748), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024046c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265e910)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265e930)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00265e938), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00265e93c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534875, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534875, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.78", StartTime:(*v1.Time)(0xc002498120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002498160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001653880)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a319ba5a76924c1d4fba71818932e03d6b79163869475c4d6028b41539dc51cb"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002498180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002498140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:42:06.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-tjbgz" for this suite. 
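The Go struct dump above is hard to read; expressed as a manifest, the pod under test looks roughly like this (field values taken directly from the dump; namespace metadata and generated names elided, so this is a reconstruction, not the exact object the framework submitted):

```yaml
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # exits non-zero every time, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                 # identical requests and limits, hence QOSClass "Guaranteed" in the dump
      requests:
        cpu: 100m
        memory: "52428800"
      limits:
        cpu: 100m
        memory: "52428800"
```

With `restartPolicy: Always`, the kubelet keeps restarting the failing init1 (RestartCount 3 in the dump) and the pod stays Pending, which is exactly what this conformance test asserts.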
May 8 11:42:28.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:42:28.534: INFO: namespace: e2e-tests-init-container-tjbgz, resource: bindings, ignored listing per whitelist May 8 11:42:28.604: INFO: namespace e2e-tests-init-container-tjbgz deletion completed in 22.14448342s • [SLOW TEST:73.684 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:42:28.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 11:42:28.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine 
--generator=run/v1 --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:28.813: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 11:42:28.813: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 8 11:42:28.816: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 8 11:42:28.837: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 8 11:42:28.852: INFO: scanned /root for discovery docs: May 8 11:42:28.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:46.751: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 8 11:42:46.751: INFO: stdout: "Created e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d\nScaling up e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 8 11:42:46.751: INFO: stdout: "Created e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d\nScaling up e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 8 11:42:46.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:46.908: INFO: stderr: "" May 8 11:42:46.908: INFO: stdout: "e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d-bn4mn " May 8 11:42:46.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d-bn4mn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:47.004: INFO: stderr: "" May 8 11:42:47.004: INFO: stdout: "true" May 8 11:42:47.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d-bn4mn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:47.108: INFO: stderr: "" May 8 11:42:47.108: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 8 11:42:47.108: INFO: e2e-test-nginx-rc-e484839363f6c750e6bff996088f303d-bn4mn is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 8 11:42:47.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jwds6' May 8 11:42:47.281: INFO: stderr: "" May 8 11:42:47.281: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:42:47.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jwds6" for this suite. 
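For context, `kubectl run --generator=run/v1` as invoked above creates a bare ReplicationController, which `rolling-update` then replaces with an identically imaged one. A hedged sketch of that object (the selector/label and container name follow the run/v1 generator's usual conventions and are assumptions here, not copied from the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc       # generator labels pods with run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc  # generator names the container after the RC
        image: docker.io/library/nginx:1.14-alpine
```

Both `--generator=run/v1` and `rolling-update` are deprecated, as the stderr lines above note; on current clusters the equivalent workflow is a Deployment plus `kubectl rollout`.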
May 8 11:42:55.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:42:55.581: INFO: namespace: e2e-tests-kubectl-jwds6, resource: bindings, ignored listing per whitelist May 8 11:42:55.620: INFO: namespace e2e-tests-kubectl-jwds6 deletion completed in 8.110670216s • [SLOW TEST:27.016 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:42:55.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9tfq6 May 8 11:43:00.243: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9tfq6 STEP: checking the pod's current state and verifying that restartCount is present May 8 11:43:00.246: INFO: Initial 
restart count of pod liveness-http is 0 May 8 11:43:16.282: INFO: Restart count of pod e2e-tests-container-probe-9tfq6/liveness-http is now 1 (16.036469778s elapsed) May 8 11:43:36.584: INFO: Restart count of pod e2e-tests-container-probe-9tfq6/liveness-http is now 2 (36.337774809s elapsed) May 8 11:43:56.640: INFO: Restart count of pod e2e-tests-container-probe-9tfq6/liveness-http is now 3 (56.393698116s elapsed) May 8 11:44:14.677: INFO: Restart count of pod e2e-tests-container-probe-9tfq6/liveness-http is now 4 (1m14.431546139s elapsed) May 8 11:45:16.806: INFO: Restart count of pod e2e-tests-container-probe-9tfq6/liveness-http is now 5 (2m16.560208619s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:45:16.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9tfq6" for this suite. May 8 11:45:22.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:45:22.911: INFO: namespace: e2e-tests-container-probe-9tfq6, resource: bindings, ignored listing per whitelist May 8 11:45:22.940: INFO: namespace e2e-tests-container-probe-9tfq6 deletion completed in 6.110594865s • [SLOW TEST:147.320 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:45:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:45:23.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-c9lp8" to be "success or failure" May 8 11:45:23.078: INFO: Pod "downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064612ms May 8 11:45:25.082: INFO: Pod "downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013762803s May 8 11:45:27.089: INFO: Pod "downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021176666s STEP: Saw pod success May 8 11:45:27.089: INFO: Pod "downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:45:27.091: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:45:27.109: INFO: Waiting for pod downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017 to disappear May 8 11:45:27.136: INFO: Pod downwardapi-volume-66a6ae16-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:45:27.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c9lp8" for this suite. May 8 11:45:33.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:45:33.174: INFO: namespace: e2e-tests-projected-c9lp8, resource: bindings, ignored listing per whitelist May 8 11:45:33.228: INFO: namespace e2e-tests-projected-c9lp8 deletion completed in 6.089096321s • [SLOW TEST:10.288 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:45:33.229: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 11:45:33.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-2m2gv" to be "success or failure" May 8 11:45:33.358: INFO: Pod "downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 31.588515ms May 8 11:45:35.431: INFO: Pod "downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104460633s May 8 11:45:37.538: INFO: Pod "downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.211899268s STEP: Saw pod success May 8 11:45:37.539: INFO: Pod "downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:45:37.541: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 11:45:37.708: INFO: Waiting for pod downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017 to disappear May 8 11:45:37.715: INFO: Pod downwardapi-volume-6cc68924-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:45:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2m2gv" for this suite. May 8 11:45:43.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:45:43.797: INFO: namespace: e2e-tests-downward-api-2m2gv, resource: bindings, ignored listing per whitelist May 8 11:45:43.809: INFO: namespace e2e-tests-downward-api-2m2gv deletion completed in 6.092531523s • [SLOW TEST:10.581 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 8 11:45:43.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 8 11:45:43.932: INFO: Waiting up to 5m0s for pod "downward-api-731a3f1c-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-ms62j" to be "success or failure" May 8 11:45:43.957: INFO: Pod "downward-api-731a3f1c-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 24.460359ms May 8 11:45:45.963: INFO: Pod "downward-api-731a3f1c-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031053172s May 8 11:45:47.967: INFO: Pod "downward-api-731a3f1c-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034609064s STEP: Saw pod success May 8 11:45:47.967: INFO: Pod "downward-api-731a3f1c-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:45:47.970: INFO: Trying to get logs from node hunter-worker2 pod downward-api-731a3f1c-9121-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 11:45:48.094: INFO: Waiting for pod downward-api-731a3f1c-9121-11ea-8adb-0242ac110017 to disappear May 8 11:45:48.132: INFO: Pod downward-api-731a3f1c-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:45:48.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ms62j" for this suite. 
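The pod for this "default limits from node allocatable" test is not dumped in the log; a minimal sketch of such a pod, assuming a busybox image and hypothetical env-var names, declares `resourceFieldRef` env vars for limits without setting any limits, so the kubelet substitutes node allocatable values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # the framework generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # image assumed
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT            # hypothetical var name
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu   # no limit declared => node allocatable CPU is reported
    - name: MEMORY_LIMIT         # hypothetical var name
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```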
May 8 11:45:54.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:45:54.239: INFO: namespace: e2e-tests-downward-api-ms62j, resource: bindings, ignored listing per whitelist May 8 11:45:54.259: INFO: namespace e2e-tests-downward-api-ms62j deletion completed in 6.123092131s • [SLOW TEST:10.449 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:45:54.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 8 11:45:58.377: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-795093ec-9121-11ea-8adb-0242ac110017,GenerateName:,Namespace:e2e-tests-events-z76xm,SelfLink:/api/v1/namespaces/e2e-tests-events-z76xm/pods/send-events-795093ec-9121-11ea-8adb-0242ac110017,UID:7952495a-9121-11ea-99e8-0242ac110002,ResourceVersion:9408941,Generation:0,CreationTimestamp:2020-05-08 11:45:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 349374678,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bzpld {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzpld,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bzpld true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027169e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002716a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:45:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:45:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:45:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:45:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.185,StartTime:2020-05-08 11:45:54 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-08 11:45:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://3800a7d4bf9bbf8f592881d687cd4f5730a7c969f714a1909daa9bf554648d70}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 8 11:46:00.382: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 8 11:46:02.388: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:46:02.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-z76xm" for this suite. 
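The pod dump above corresponds to a manifest along these lines (values copied from the dump; the generated name suffix and namespace elided, so treat this as a reconstruction):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: foo
    time: "349374678"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then watches for a scheduler event (the pod being bound to hunter-worker) and a kubelet event (the container being pulled/started), both of which it saw above.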
May 8 11:46:42.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:46:42.515: INFO: namespace: e2e-tests-events-z76xm, resource: bindings, ignored listing per whitelist May 8 11:46:42.524: INFO: namespace e2e-tests-events-z76xm deletion completed in 40.121026828s • [SLOW TEST:48.265 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:46:42.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-961507e2-9121-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:46:42.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-8twpc" to be "success or failure" May 8 11:46:42.637: INFO: Pod "pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007428ms May 8 11:46:44.642: INFO: Pod "pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008434568s May 8 11:46:46.646: INFO: Pod "pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01305594s STEP: Saw pod success May 8 11:46:46.646: INFO: Pod "pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:46:46.650: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 8 11:46:46.669: INFO: Waiting for pod pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017 to disappear May 8 11:46:46.697: INFO: Pod pod-projected-secrets-9616ec1e-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:46:46.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8twpc" for this suite. 
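The log names the secret and the test container but not the volume layout; a minimal sketch of a projected-secret-with-mappings pod is below. The key and path names are assumptions illustrating what "with mappings" means (a secret key remapped to a chosen file path), and the truncated secret name is left as generated:

```yaml
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-...   # generated suffix as in the log
          items:
          - key: data-1                # assumed key name
            path: new-path-data-1      # the "mapping": key surfaced at this path
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29       # image assumed
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```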
May 8 11:46:52.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:46:52.751: INFO: namespace: e2e-tests-projected-8twpc, resource: bindings, ignored listing per whitelist May 8 11:46:52.818: INFO: namespace e2e-tests-projected-8twpc deletion completed in 6.117057233s • [SLOW TEST:10.293 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:46:52.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-t7wn STEP: Creating a pod to test atomic-volume-subpath May 8 11:46:52.994: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t7wn" in namespace "e2e-tests-subpath-lg2g2" to be "success or failure" May 8 11:46:52.998: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.786694ms May 8 11:46:55.001: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00757562s May 8 11:46:57.006: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011873193s May 8 11:46:59.009: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015409793s May 8 11:47:01.014: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 8.020036612s May 8 11:47:03.018: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 10.024576187s May 8 11:47:05.022: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 12.027922918s May 8 11:47:07.026: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 14.032481514s May 8 11:47:09.031: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 16.03686856s May 8 11:47:11.035: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 18.041107488s May 8 11:47:13.039: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 20.045031513s May 8 11:47:15.044: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 22.04998489s May 8 11:47:17.048: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Running", Reason="", readiness=false. Elapsed: 24.054336484s May 8 11:47:19.052: INFO: Pod "pod-subpath-test-secret-t7wn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.058169364s STEP: Saw pod success May 8 11:47:19.052: INFO: Pod "pod-subpath-test-secret-t7wn" satisfied condition "success or failure" May 8 11:47:19.055: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-t7wn container test-container-subpath-secret-t7wn: STEP: delete the pod May 8 11:47:19.096: INFO: Waiting for pod pod-subpath-test-secret-t7wn to disappear May 8 11:47:19.129: INFO: Pod pod-subpath-test-secret-t7wn no longer exists STEP: Deleting pod pod-subpath-test-secret-t7wn May 8 11:47:19.129: INFO: Deleting pod "pod-subpath-test-secret-t7wn" in namespace "e2e-tests-subpath-lg2g2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:47:19.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lg2g2" for this suite. May 8 11:47:25.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:47:25.186: INFO: namespace: e2e-tests-subpath-lg2g2, resource: bindings, ignored listing per whitelist May 8 11:47:25.223: INFO: namespace e2e-tests-subpath-lg2g2 deletion completed in 6.088448164s • [SLOW TEST:32.405 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:47:25.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:47:25.374: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 8 11:47:25.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5db88/daemonsets","resourceVersion":"9409196"},"items":null} May 8 11:47:25.382: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5db88/pods","resourceVersion":"9409196"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:47:25.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-5db88" for this suite. 
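The "Requires at least 2 nodes (not -1)" message above is the framework's multi-node guard firing: the rollback spec needs two schedulable nodes, and -1 stands in for a node count that could not be established. A hypothetical sketch of that guard (the real check lives in test/e2e/framework/util.go):

```python
def check_node_count(num_nodes, required=2):
    """Return the skip message the e2e framework logs when too few
    schedulable nodes are available, or None when the requirement is
    met. Hypothetical helper modeled on the SKIPPING output above;
    num_nodes may be -1 when the count could not be determined."""
    if num_nodes < required:
        return f"Requires at least {required} nodes (not {num_nodes})"
    return None
```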
May 8 11:47:31.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:47:31.435: INFO: namespace: e2e-tests-daemonsets-5db88, resource: bindings, ignored listing per whitelist May 8 11:47:31.537: INFO: namespace e2e-tests-daemonsets-5db88 deletion completed in 6.144189519s S [SKIPPING] [6.315 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 8 11:47:25.374: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:47:31.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cshng STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cshng to expose endpoints map[] May 8 11:47:31.662: INFO: Get endpoints failed (12.577223ms elapsed, ignoring for 5s): endpoints 
"multi-endpoint-test" not found May 8 11:47:32.666: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cshng exposes endpoints map[] (1.016455861s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cshng STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cshng to expose endpoints map[pod1:[100]] May 8 11:47:36.788: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cshng exposes endpoints map[pod1:[100]] (4.112386186s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cshng STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cshng to expose endpoints map[pod1:[100] pod2:[101]] May 8 11:47:40.917: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cshng exposes endpoints map[pod1:[100] pod2:[101]] (4.124987878s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cshng STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cshng to expose endpoints map[pod2:[101]] May 8 11:47:42.038: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cshng exposes endpoints map[pod2:[101]] (1.115747689s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-cshng STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cshng to expose endpoints map[] May 8 11:47:43.133: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cshng exposes endpoints map[] (1.090569509s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:47:43.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cshng" for this suite. 
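Each "waiting up to 3m0s ... to expose endpoints map[...]" step above compares the ports observed on the service's Endpoints object against an expected pod-name-to-ports map such as map[pod1:[100] pod2:[101]]. A minimal, order-insensitive sketch of that comparison (a hypothetical helper, not the framework's actual function):

```python
def endpoints_match(expected, observed):
    """Compare two pod-name -> port-list maps, e.g.
    {"pod1": [100], "pod2": [101]}, ignoring pod and port ordering,
    the way the validation steps in the log above do. An empty map
    means the service should expose no endpoints."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(expected) == normalize(observed)
```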
May 8 11:48:05.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:48:05.532: INFO: namespace: e2e-tests-services-cshng, resource: bindings, ignored listing per whitelist May 8 11:48:05.567: INFO: namespace e2e-tests-services-cshng deletion completed in 22.124545618s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:34.029 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:48:05.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 8 11:48:05.724: INFO: Waiting up to 5m0s for pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-containers-srmgh" to be "success or failure" May 8 11:48:05.748: INFO: Pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.685713ms May 8 11:48:07.834: INFO: Pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109892723s May 8 11:48:09.838: INFO: Pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114151071s May 8 11:48:11.843: INFO: Pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118412952s STEP: Saw pod success May 8 11:48:11.843: INFO: Pod "client-containers-c79e550b-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:48:11.846: INFO: Trying to get logs from node hunter-worker2 pod client-containers-c79e550b-9121-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 11:48:11.862: INFO: Waiting for pod client-containers-c79e550b-9121-11ea-8adb-0242ac110017 to disappear May 8 11:48:11.867: INFO: Pod client-containers-c79e550b-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:48:11.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-srmgh" for this suite. 
May 8 11:48:17.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:48:17.932: INFO: namespace: e2e-tests-containers-srmgh, resource: bindings, ignored listing per whitelist May 8 11:48:17.964: INFO: namespace e2e-tests-containers-srmgh deletion completed in 6.094324926s • [SLOW TEST:12.397 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:48:17.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 8 11:48:18.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-498cb' May 8 11:48:20.700: INFO: stderr: "" May 8 11:48:20.700: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 8 11:48:21.704: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:21.705: INFO: Found 0 / 1 May 8 11:48:22.705: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:22.705: INFO: Found 0 / 1 May 8 11:48:23.704: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:23.704: INFO: Found 0 / 1 May 8 11:48:24.750: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:24.750: INFO: Found 1 / 1 May 8 11:48:24.750: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 8 11:48:24.754: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:24.754: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 11:48:24.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-xvdjg --namespace=e2e-tests-kubectl-498cb -p {"metadata":{"annotations":{"x":"y"}}}' May 8 11:48:24.896: INFO: stderr: "" May 8 11:48:24.896: INFO: stdout: "pod/redis-master-xvdjg patched\n" STEP: checking annotations May 8 11:48:24.916: INFO: Selector matched 1 pods for map[app:redis] May 8 11:48:24.916: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:48:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-498cb" for this suite. 
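The patch step above passes a strategic-merge-patch body via `kubectl patch ... -p`. The compact JSON argument seen in the log can be built like this (a sketch; the test itself shells out to kubectl rather than constructing JSON in code):

```python
import json

# The annotation patch applied to pod redis-master-xvdjg in the log above.
patch = {"metadata": {"annotations": {"x": "y"}}}

# separators=(",", ":") drops the spaces json.dumps inserts by default,
# producing the exact -p argument shown in the kubectl invocation.
body = json.dumps(patch, separators=(",", ":"))
```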
May 8 11:48:48.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:48:49.078: INFO: namespace: e2e-tests-kubectl-498cb, resource: bindings, ignored listing per whitelist May 8 11:48:49.103: INFO: namespace e2e-tests-kubectl-498cb deletion completed in 24.183824835s • [SLOW TEST:31.139 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:48:49.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 8 11:48:49.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 8 11:48:49.969: INFO: stderr: "" May 8 11:48:49.969: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:48:49.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qd5ns" for this suite. May 8 11:48:56.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:48:56.212: INFO: namespace: e2e-tests-kubectl-qd5ns, resource: bindings, ignored listing per whitelist May 8 11:48:56.261: INFO: namespace e2e-tests-kubectl-qd5ns deletion completed in 6.287572643s • [SLOW TEST:7.158 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:48:56.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017 May 8 11:48:56.455: INFO: Pod name my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017: Found 0 pods out of 1 May 8 11:49:01.460: INFO: Pod name my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017: Found 1 pods out of 1 May 8 11:49:01.460: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017" are running May 8 11:49:01.463: INFO: Pod "my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017-dbz7l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:48:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:48:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:48:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:48:56 +0000 UTC Reason: Message:}]) May 8 11:49:01.463: INFO: Trying to dial the pod May 8 11:49:06.475: INFO: Controller my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017: Got expected result from replica 1 [my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017-dbz7l]: "my-hostname-basic-e5d0ec37-9121-11ea-8adb-0242ac110017-dbz7l", 1 of 1 required successes so far 
[AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:49:06.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-bbtsp" for this suite. May 8 11:49:12.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:49:12.556: INFO: namespace: e2e-tests-replication-controller-bbtsp, resource: bindings, ignored listing per whitelist May 8 11:49:12.587: INFO: namespace e2e-tests-replication-controller-bbtsp deletion completed in 6.107227542s • [SLOW TEST:16.326 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:49:12.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-bx447/secret-test-ef8bfe25-9121-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 11:49:12.739: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-bx447" to be "success or failure" May 8 11:49:12.743: INFO: Pod "pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.944298ms May 8 11:49:14.747: INFO: Pod "pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00817371s May 8 11:49:16.750: INFO: Pod "pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011655185s STEP: Saw pod success May 8 11:49:16.751: INFO: Pod "pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:49:16.753: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017 container env-test: STEP: delete the pod May 8 11:49:16.824: INFO: Waiting for pod pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017 to disappear May 8 11:49:16.876: INFO: Pod pod-configmaps-ef8e7979-9121-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:49:16.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bx447" for this suite. 
May 8 11:49:22.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:49:22.936: INFO: namespace: e2e-tests-secrets-bx447, resource: bindings, ignored listing per whitelist May 8 11:49:22.992: INFO: namespace e2e-tests-secrets-bx447 deletion completed in 6.111614235s • [SLOW TEST:10.405 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:49:22.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 8 11:49:27.607: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f5b8a9eb-9121-11ea-8adb-0242ac110017" May 8 11:49:27.607: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f5b8a9eb-9121-11ea-8adb-0242ac110017" in namespace "e2e-tests-pods-tdvn4" to be "terminated due to deadline 
exceeded" May 8 11:49:27.627: INFO: Pod "pod-update-activedeadlineseconds-f5b8a9eb-9121-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 19.135099ms May 8 11:49:29.630: INFO: Pod "pod-update-activedeadlineseconds-f5b8a9eb-9121-11ea-8adb-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.022958854s May 8 11:49:29.630: INFO: Pod "pod-update-activedeadlineseconds-f5b8a9eb-9121-11ea-8adb-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:49:29.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-tdvn4" for this suite. May 8 11:49:35.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:49:35.689: INFO: namespace: e2e-tests-pods-tdvn4, resource: bindings, ignored listing per whitelist May 8 11:49:35.714: INFO: namespace e2e-tests-pods-tdvn4 deletion completed in 6.078758443s • [SLOW TEST:12.722 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:49:35.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: 
Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May  8 11:49:35.876: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May  8 11:49:35.919: INFO: Number of nodes with available pods: 0
May  8 11:49:35.919: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May  8 11:49:35.979: INFO: Number of nodes with available pods: 0
May  8 11:49:35.979: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:36.984: INFO: Number of nodes with available pods: 0
May  8 11:49:36.984: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:37.984: INFO: Number of nodes with available pods: 0
May  8 11:49:37.984: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:38.984: INFO: Number of nodes with available pods: 1
May  8 11:49:38.984: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May  8 11:49:39.022: INFO: Number of nodes with available pods: 1
May  8 11:49:39.022: INFO: Number of running nodes: 0, number of available pods: 1
May  8 11:49:40.027: INFO: Number of nodes with available pods: 0
May  8 11:49:40.027: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May  8 11:49:40.037: INFO: Number of nodes with available pods: 0
May  8 11:49:40.037: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:41.042: INFO: Number of nodes with available pods: 0
May  8 11:49:41.042: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:42.042: INFO: Number of nodes with available pods: 0
May  8 11:49:42.042: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:43.041: INFO: Number of nodes with available pods: 0
May  8 11:49:43.041: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:44.041: INFO: Number of nodes with available pods: 0
May  8 11:49:44.041: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:45.042: INFO: Number of nodes with available pods: 0
May  8 11:49:45.042: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:46.041: INFO: Number of nodes with available pods: 0
May  8 11:49:46.041: INFO: Node hunter-worker is running more than one daemon pod
May  8 11:49:47.042: INFO: Number of nodes with available pods: 1
May  8 11:49:47.042: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6xggj, will wait for the garbage collector to delete the pods
May  8 11:49:47.108: INFO: Deleting DaemonSet.extensions daemon-set took: 7.129965ms
May  8 11:49:47.208: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.225866ms
May  8 11:50:01.363: INFO: Number of nodes with available pods: 0
May  8 11:50:01.363: INFO: Number of running nodes: 0, number of available pods: 0
May  8 11:50:01.366: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6xggj/daemonsets","resourceVersion":"9409757"},"items":null}
May  8 11:50:01.368: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6xggj/pods","resourceVersion":"9409757"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:50:01.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6xggj" for this suite.
May  8 11:50:07.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:50:07.483: INFO: namespace: e2e-tests-daemonsets-6xggj, resource: bindings, ignored listing per whitelist
May  8 11:50:07.486: INFO: namespace e2e-tests-daemonsets-6xggj deletion completed in 6.084897134s

• [SLOW TEST:31.772 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:50:07.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  8 11:50:15.662: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:15.666: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:17.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:17.669: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:19.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:19.671: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:21.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:21.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:23.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:23.671: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:25.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:25.669: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:27.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:27.671: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:29.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:29.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:31.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:31.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:33.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:33.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:35.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:35.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:37.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:37.670: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:39.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:39.699: INFO: Pod pod-with-poststart-exec-hook still exists
May  8 11:50:41.666: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  8 11:50:41.670: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:50:41.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mqhvd" for this suite.
May  8 11:51:03.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:51:03.723: INFO: namespace: e2e-tests-container-lifecycle-hook-mqhvd, resource: bindings, ignored listing per whitelist
May  8 11:51:03.760: INFO: namespace e2e-tests-container-lifecycle-hook-mqhvd deletion completed in 22.085827414s

• [SLOW TEST:56.274 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:51:03.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May  8 11:51:03.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-7vsvm" to be "success or failure"
May  8 11:51:03.955: INFO: Pod "downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.176209ms
May  8 11:51:05.960: INFO: Pod "downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024618759s
May  8 11:51:07.964: INFO: Pod "downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028416146s
STEP: Saw pod success
May  8 11:51:07.964: INFO: Pod "downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May  8 11:51:07.967: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017 container client-container:
STEP: delete the pod
May  8 11:51:08.216: INFO: Waiting for pod downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017 to disappear
May  8 11:51:08.271: INFO: Pod downwardapi-volume-31d6a989-9122-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:51:08.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7vsvm" for this suite.
May  8 11:51:14.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:51:14.370: INFO: namespace: e2e-tests-downward-api-7vsvm, resource: bindings, ignored listing per whitelist
May  8 11:51:14.396: INFO: namespace e2e-tests-downward-api-7vsvm deletion completed in 6.122685915s

• [SLOW TEST:10.636 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:51:14.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
May  8 11:51:14.537: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-v9nnb" to be "success or failure"
May  8 11:51:14.548: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204104ms
May  8 11:51:16.552: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014379059s
May  8 11:51:18.556: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01835081s
May  8 11:51:20.560: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022774802s
STEP: Saw pod success
May  8 11:51:20.560: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May  8 11:51:20.563: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May  8 11:51:20.619: INFO: Waiting for pod pod-host-path-test to disappear
May  8 11:51:20.632: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:51:20.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-v9nnb" for this suite.
May  8 11:51:26.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:51:26.693: INFO: namespace: e2e-tests-hostpath-v9nnb, resource: bindings, ignored listing per whitelist
May  8 11:51:26.737: INFO: namespace e2e-tests-hostpath-v9nnb deletion completed in 6.10099807s

• [SLOW TEST:12.340 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:51:26.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-3f7b97d8-9122-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May  8 11:51:26.843: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-pdgl5" to be "success or failure"
May  8 11:51:26.861: INFO: Pod "pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.09193ms
May  8 11:51:28.864: INFO: Pod "pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021019852s
May  8 11:51:30.869: INFO: Pod "pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025391924s
STEP: Saw pod success
May  8 11:51:30.869: INFO: Pod "pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May  8 11:51:30.872: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May  8 11:51:30.920: INFO: Waiting for pod pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017 to disappear
May  8 11:51:30.937: INFO: Pod pod-projected-configmaps-3f7e143a-9122-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:51:30.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pdgl5" for this suite.
May  8 11:51:36.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:51:36.974: INFO: namespace: e2e-tests-projected-pdgl5, resource: bindings, ignored listing per whitelist
May  8 11:51:37.030: INFO: namespace e2e-tests-projected-pdgl5 deletion completed in 6.088774592s

• [SLOW TEST:10.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:51:37.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
May  8 11:51:37.204: INFO: Waiting up to 5m0s for pod "client-containers-45a8537a-9122-11ea-8adb-0242ac110017" in namespace "e2e-tests-containers-lgz25" to be "success or failure"
May  8 11:51:37.286: INFO: Pod "client-containers-45a8537a-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 82.371601ms
May  8 11:51:39.358: INFO: Pod "client-containers-45a8537a-9122-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153949176s
May  8 11:51:41.362: INFO: Pod "client-containers-45a8537a-9122-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158131138s
STEP: Saw pod success
May  8 11:51:41.362: INFO: Pod "client-containers-45a8537a-9122-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May  8 11:51:41.365: INFO: Trying to get logs from node hunter-worker2 pod client-containers-45a8537a-9122-11ea-8adb-0242ac110017 container test-container:
STEP: delete the pod
May  8 11:51:41.387: INFO: Waiting for pod client-containers-45a8537a-9122-11ea-8adb-0242ac110017 to disappear
May  8 11:51:41.392: INFO: Pod client-containers-45a8537a-9122-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:51:41.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lgz25" for this suite.
May  8 11:51:47.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:51:47.455: INFO: namespace: e2e-tests-containers-lgz25, resource: bindings, ignored listing per whitelist
May  8 11:51:47.482: INFO: namespace e2e-tests-containers-lgz25 deletion completed in 6.086755122s

• [SLOW TEST:10.451 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:51:47.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qqwdr
May  8 11:51:51.612: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qqwdr
STEP: checking the pod's current state and verifying that restartCount is present
May  8 11:51:51.615: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:55:52.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qqwdr" for this suite.
May  8 11:55:58.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:55:58.547: INFO: namespace: e2e-tests-container-probe-qqwdr, resource: bindings, ignored listing per whitelist
May  8 11:55:58.562: INFO: namespace e2e-tests-container-probe-qqwdr deletion completed in 6.086358211s

• [SLOW TEST:251.080 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:55:58.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-lf4lm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lf4lm to expose endpoints map[]
May  8 11:55:58.722: INFO: Get endpoints failed (9.622722ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May  8 11:55:59.725: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lf4lm exposes endpoints map[] (1.012965771s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lf4lm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lf4lm to expose endpoints map[pod1:[80]]
May  8 11:56:03.816: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lf4lm exposes endpoints map[pod1:[80]] (4.084340911s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lf4lm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lf4lm to expose endpoints map[pod1:[80] pod2:[80]]
May  8 11:56:06.885: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lf4lm exposes endpoints map[pod1:[80] pod2:[80]] (3.064998703s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lf4lm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lf4lm to expose endpoints map[pod2:[80]]
May  8 11:56:07.933: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lf4lm exposes endpoints map[pod2:[80]] (1.04438235s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lf4lm
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lf4lm to expose endpoints map[]
May  8 11:56:08.954: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lf4lm exposes endpoints map[] (1.016401743s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:56:09.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lf4lm" for this suite.
May  8 11:56:31.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:56:31.168: INFO: namespace: e2e-tests-services-lf4lm, resource: bindings, ignored listing per whitelist
May  8 11:56:31.207: INFO: namespace e2e-tests-services-lf4lm deletion completed in 22.083415715s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.645 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:56:31.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May  8 11:56:38.383: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:56:39.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-lskx5" for this suite.
May  8 11:57:01.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:57:01.613: INFO: namespace: e2e-tests-replicaset-lskx5, resource: bindings, ignored listing per whitelist
May  8 11:57:01.631: INFO: namespace e2e-tests-replicaset-lskx5 deletion completed in 22.144301186s

• [SLOW TEST:30.423 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:57:01.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-073c979d-9123-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume secrets
May  8 11:57:02.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-4srwf" to be "success or failure"
May  8 11:57:02.555: INFO: Pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 68.356906ms
May  8 11:57:04.559: INFO: Pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072603257s
May  8 11:57:06.562: INFO: Pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076054711s
May  8 11:57:08.567: INFO: Pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080783894s
STEP: Saw pod success
May  8 11:57:08.567: INFO: Pod "pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May  8 11:57:08.571: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May  8 11:57:08.593: INFO: Waiting for pod pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017 to disappear
May  8 11:57:08.680: INFO: Pod pod-projected-secrets-078af091-9123-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:57:08.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4srwf" for this suite.
May  8 11:57:14.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:57:14.920: INFO: namespace: e2e-tests-projected-4srwf, resource: bindings, ignored listing per whitelist
May  8 11:57:14.928: INFO: namespace e2e-tests-projected-4srwf deletion completed in 6.243268204s

• [SLOW TEST:13.297 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:57:14.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May  8 11:57:15.003: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:57:16.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-vgj8x" for this suite.
May  8 11:57:24.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:57:24.113: INFO: namespace: e2e-tests-custom-resource-definition-vgj8x, resource: bindings, ignored listing per whitelist
May  8 11:57:24.178: INFO: namespace e2e-tests-custom-resource-definition-vgj8x deletion completed in 8.096164755s

• [SLOW TEST:9.251 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:57:24.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
May  8 11:57:24.269: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix830968705/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:57:24.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vznrh" for this suite.
May  8 11:57:30.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  8 11:57:30.506: INFO: namespace: e2e-tests-kubectl-vznrh, resource: bindings, ignored listing per whitelist
May  8 11:57:30.522: INFO: namespace e2e-tests-kubectl-vznrh deletion completed in 6.163931093s

• [SLOW TEST:6.343 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  8 11:57:30.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-2mvqp/configmap-test-18812ad1-9123-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May  8 11:57:30.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-2mvqp" to be "success or failure"
May  8 11:57:30.986: INFO: Pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.962518ms
May  8 11:57:32.992: INFO: Pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016579808s
May  8 11:57:34.995: INFO: Pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01953773s
May  8 11:57:37.000: INFO: Pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024203923s
STEP: Saw pod success
May  8 11:57:37.000: INFO: Pod "pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May  8 11:57:37.003: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017 container env-test:
STEP: delete the pod
May  8 11:57:37.059: INFO: Waiting for pod pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017 to disappear
May  8 11:57:37.066: INFO: Pod pod-configmaps-1881b4e7-9123-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  8 11:57:37.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2mvqp" for this suite.
May 8 11:57:43.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:57:43.098: INFO: namespace: e2e-tests-configmap-2mvqp, resource: bindings, ignored listing per whitelist May 8 11:57:43.156: INFO: namespace e2e-tests-configmap-2mvqp deletion completed in 6.087318864s • [SLOW TEST:12.634 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:57:43.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-1fdbd5c7-9123-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume configMaps May 8 11:57:43.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017" in namespace "e2e-tests-configmap-vgfg7" to be "success or failure" May 8 11:57:43.288: INFO: Pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.786742ms May 8 11:57:45.292: INFO: Pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008134621s May 8 11:57:47.296: INFO: Pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.012144479s May 8 11:57:49.300: INFO: Pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016009618s STEP: Saw pod success May 8 11:57:49.300: INFO: Pod "pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 11:57:49.303: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017 container configmap-volume-test: STEP: delete the pod May 8 11:57:49.331: INFO: Waiting for pod pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017 to disappear May 8 11:57:49.393: INFO: Pod pod-configmaps-1fde403b-9123-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:57:49.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vgfg7" for this suite. 
May 8 11:57:57.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:57:57.435: INFO: namespace: e2e-tests-configmap-vgfg7, resource: bindings, ignored listing per whitelist May 8 11:57:57.470: INFO: namespace e2e-tests-configmap-vgfg7 deletion completed in 8.072458722s • [SLOW TEST:14.314 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:57:57.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bvzxn May 8 11:58:04.061: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bvzxn STEP: checking the pod's current state and verifying that restartCount is present May 8 11:58:04.064: INFO: Initial restart count of pod 
liveness-exec is 0 May 8 11:58:52.515: INFO: Restart count of pod e2e-tests-container-probe-bvzxn/liveness-exec is now 1 (48.451728872s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 11:58:52.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bvzxn" for this suite. May 8 11:58:58.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 11:58:58.903: INFO: namespace: e2e-tests-container-probe-bvzxn, resource: bindings, ignored listing per whitelist May 8 11:58:58.918: INFO: namespace e2e-tests-container-probe-bvzxn deletion completed in 6.240976537s • [SLOW TEST:61.448 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 11:58:58.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 8 11:59:00.113: INFO: Pod name wrapped-volume-race-4da01ec9-9123-11ea-8adb-0242ac110017: Found 0 pods out of 5 May 8 11:59:05.122: INFO: Pod name wrapped-volume-race-4da01ec9-9123-11ea-8adb-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4da01ec9-9123-11ea-8adb-0242ac110017 in namespace e2e-tests-emptydir-wrapper-kljvj, will wait for the garbage collector to delete the pods May 8 12:01:21.217: INFO: Deleting ReplicationController wrapped-volume-race-4da01ec9-9123-11ea-8adb-0242ac110017 took: 7.286343ms May 8 12:01:21.317: INFO: Terminating ReplicationController wrapped-volume-race-4da01ec9-9123-11ea-8adb-0242ac110017 pods took: 100.242285ms STEP: Creating RC which spawns configmap-volume pods May 8 12:02:02.411: INFO: Pod name wrapped-volume-race-ba44e602-9123-11ea-8adb-0242ac110017: Found 0 pods out of 5 May 8 12:02:07.418: INFO: Pod name wrapped-volume-race-ba44e602-9123-11ea-8adb-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ba44e602-9123-11ea-8adb-0242ac110017 in namespace e2e-tests-emptydir-wrapper-kljvj, will wait for the garbage collector to delete the pods May 8 12:04:01.519: INFO: Deleting ReplicationController wrapped-volume-race-ba44e602-9123-11ea-8adb-0242ac110017 took: 7.40246ms May 8 12:04:01.619: INFO: Terminating ReplicationController wrapped-volume-race-ba44e602-9123-11ea-8adb-0242ac110017 pods took: 100.222631ms STEP: Creating RC which spawns configmap-volume pods May 8 12:04:41.547: INFO: Pod name wrapped-volume-race-19292b7f-9124-11ea-8adb-0242ac110017: Found 0 pods out of 5 May 8 12:04:46.555: INFO: Pod name wrapped-volume-race-19292b7f-9124-11ea-8adb-0242ac110017: Found 5 pods out of 5 STEP: Ensuring 
each pod is running STEP: deleting ReplicationController wrapped-volume-race-19292b7f-9124-11ea-8adb-0242ac110017 in namespace e2e-tests-emptydir-wrapper-kljvj, will wait for the garbage collector to delete the pods May 8 12:06:50.650: INFO: Deleting ReplicationController wrapped-volume-race-19292b7f-9124-11ea-8adb-0242ac110017 took: 17.828346ms May 8 12:06:50.850: INFO: Terminating ReplicationController wrapped-volume-race-19292b7f-9124-11ea-8adb-0242ac110017 pods took: 200.265697ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:07:32.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kljvj" for this suite. May 8 12:07:40.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:07:40.705: INFO: namespace: e2e-tests-emptydir-wrapper-kljvj, resource: bindings, ignored listing per whitelist May 8 12:07:40.705: INFO: namespace e2e-tests-emptydir-wrapper-kljvj deletion completed in 8.111931401s • [SLOW TEST:521.786 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 
12:07:40.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 8 12:07:45.449: INFO: Successfully updated pod "annotationupdate8409f4d4-9124-11ea-8adb-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:07:47.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ph26b" for this suite. May 8 12:08:09.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:08:09.550: INFO: namespace: e2e-tests-projected-ph26b, resource: bindings, ignored listing per whitelist May 8 12:08:09.573: INFO: namespace e2e-tests-projected-ph26b deletion completed in 22.091087966s • [SLOW TEST:28.868 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:08:09.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-953b6522-9124-11ea-8adb-0242ac110017 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-953b6522-9124-11ea-8adb-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:08:15.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bcqgz" for this suite. 
May 8 12:08:37.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:08:37.868: INFO: namespace: e2e-tests-projected-bcqgz, resource: bindings, ignored listing per whitelist May 8 12:08:37.892: INFO: namespace e2e-tests-projected-bcqgz deletion completed in 22.135568165s • [SLOW TEST:28.319 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:08:37.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:09:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-6gcr6" for this suite. 
May 8 12:10:00.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:10:00.107: INFO: namespace: e2e-tests-container-probe-6gcr6, resource: bindings, ignored listing per whitelist May 8 12:10:00.141: INFO: namespace e2e-tests-container-probe-6gcr6 deletion completed in 22.105622324s • [SLOW TEST:82.249 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:10:00.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 8 12:10:04.855: INFO: Successfully updated pod "labelsupdated7206a72-9124-11ea-8adb-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:10:06.894: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-downward-api-bcnf6" for this suite. May 8 12:10:28.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:10:28.949: INFO: namespace: e2e-tests-downward-api-bcnf6, resource: bindings, ignored listing per whitelist May 8 12:10:28.982: INFO: namespace e2e-tests-downward-api-bcnf6 deletion completed in 22.083402867s • [SLOW TEST:28.841 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:10:28.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 12:10:29.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod 
--generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7kb8g' May 8 12:10:34.614: INFO: stderr: "" May 8 12:10:34.614: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 8 12:10:39.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7kb8g -o json' May 8 12:10:39.761: INFO: stderr: "" May 8 12:10:39.761: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-08T12:10:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-7kb8g\",\n \"resourceVersion\": \"9413093\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-7kb8g/pods/e2e-test-nginx-pod\",\n \"uid\": \"eb9b820e-9124-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xlt7k\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xlt7k\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xlt7k\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T12:10:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T12:10:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T12:10:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T12:10:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b9ecab117e3f50b91854ba623ac899095523a5efaec5de4d8ec427318b292a66\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-08T12:10:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.94\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-08T12:10:34Z\"\n }\n}\n" STEP: replace the image in the pod May 8 12:10:39.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-7kb8g' May 8 12:10:40.007: INFO: stderr: "" May 8 12:10:40.007: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 8 12:10:40.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7kb8g' May 8 12:10:44.338: INFO: stderr: "" May 8 12:10:44.338: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:10:44.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7kb8g" for this suite. May 8 12:10:50.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:10:50.385: INFO: namespace: e2e-tests-kubectl-7kb8g, resource: bindings, ignored listing per whitelist May 8 12:10:50.430: INFO: namespace e2e-tests-kubectl-7kb8g deletion completed in 6.084068964s • [SLOW TEST:21.448 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:10:50.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: 
Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 8 12:10:54.791: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:11:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-d9zvp" for this suite. May 8 12:11:26.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:11:26.181: INFO: namespace: e2e-tests-namespaces-d9zvp, resource: bindings, ignored listing per whitelist May 8 12:11:26.254: INFO: namespace e2e-tests-namespaces-d9zvp deletion completed in 6.1439505s STEP: Destroying namespace "e2e-tests-nsdeletetest-dhc4c" for this suite. May 8 12:11:26.257: INFO: Namespace e2e-tests-nsdeletetest-dhc4c was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-b9j4w" for this suite. 
May 8 12:11:32.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:11:32.324: INFO: namespace: e2e-tests-nsdeletetest-b9j4w, resource: bindings, ignored listing per whitelist May 8 12:11:32.355: INFO: namespace e2e-tests-nsdeletetest-b9j4w deletion completed in 6.098728347s • [SLOW TEST:41.925 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:11:32.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8mv2g May 8 12:11:36.526: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8mv2g STEP: checking the pod's current state and verifying that restartCount is present May 8 12:11:36.529: INFO: Initial 
restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:15:37.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8mv2g" for this suite. May 8 12:15:43.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:15:43.708: INFO: namespace: e2e-tests-container-probe-8mv2g, resource: bindings, ignored listing per whitelist May 8 12:15:43.723: INFO: namespace e2e-tests-container-probe-8mv2g deletion completed in 6.126525501s • [SLOW TEST:251.367 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:15:43.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 12:15:43.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-pbktn" to be "success or failure" May 8 12:15:43.823: INFO: Pod "downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253111ms May 8 12:15:45.828: INFO: Pod "downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008632973s May 8 12:15:47.832: INFO: Pod "downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013061105s STEP: Saw pod success May 8 12:15:47.832: INFO: Pod "downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:15:47.835: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 12:15:47.988: INFO: Waiting for pod downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017 to disappear May 8 12:15:48.003: INFO: Pod downwardapi-volume-a3e96cf5-9125-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:15:48.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pbktn" for this suite. 
May 8 12:15:54.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:15:54.065: INFO: namespace: e2e-tests-downward-api-pbktn, resource: bindings, ignored listing per whitelist May 8 12:15:54.117: INFO: namespace e2e-tests-downward-api-pbktn deletion completed in 6.110413572s • [SLOW TEST:10.394 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:15:54.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-phzr9 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 12:15:54.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 12:16:20.358: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.95 8081 | grep -v '^\s*$'] 
Namespace:e2e-tests-pod-network-test-phzr9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 12:16:20.358: INFO: >>> kubeConfig: /root/.kube/config I0508 12:16:20.392333 6 log.go:172] (0xc000afd130) (0xc000f53400) Create stream I0508 12:16:20.392358 6 log.go:172] (0xc000afd130) (0xc000f53400) Stream added, broadcasting: 1 I0508 12:16:20.395623 6 log.go:172] (0xc000afd130) Reply frame received for 1 I0508 12:16:20.395665 6 log.go:172] (0xc000afd130) (0xc002185360) Create stream I0508 12:16:20.395681 6 log.go:172] (0xc000afd130) (0xc002185360) Stream added, broadcasting: 3 I0508 12:16:20.397048 6 log.go:172] (0xc000afd130) Reply frame received for 3 I0508 12:16:20.397103 6 log.go:172] (0xc000afd130) (0xc000f534a0) Create stream I0508 12:16:20.397371 6 log.go:172] (0xc000afd130) (0xc000f534a0) Stream added, broadcasting: 5 I0508 12:16:20.398907 6 log.go:172] (0xc000afd130) Reply frame received for 5 I0508 12:16:21.477432 6 log.go:172] (0xc000afd130) Data frame received for 3 I0508 12:16:21.477507 6 log.go:172] (0xc002185360) (3) Data frame handling I0508 12:16:21.477555 6 log.go:172] (0xc002185360) (3) Data frame sent I0508 12:16:21.477623 6 log.go:172] (0xc000afd130) Data frame received for 5 I0508 12:16:21.477645 6 log.go:172] (0xc000f534a0) (5) Data frame handling I0508 12:16:21.477744 6 log.go:172] (0xc000afd130) Data frame received for 3 I0508 12:16:21.477765 6 log.go:172] (0xc002185360) (3) Data frame handling I0508 12:16:21.479394 6 log.go:172] (0xc000afd130) Data frame received for 1 I0508 12:16:21.479440 6 log.go:172] (0xc000f53400) (1) Data frame handling I0508 12:16:21.479465 6 log.go:172] (0xc000f53400) (1) Data frame sent I0508 12:16:21.479481 6 log.go:172] (0xc000afd130) (0xc000f53400) Stream removed, broadcasting: 1 I0508 12:16:21.479498 6 log.go:172] (0xc000afd130) Go away received I0508 12:16:21.479648 6 log.go:172] (0xc000afd130) (0xc000f53400) Stream removed, 
broadcasting: 1 I0508 12:16:21.479673 6 log.go:172] (0xc000afd130) (0xc002185360) Stream removed, broadcasting: 3 I0508 12:16:21.479688 6 log.go:172] (0xc000afd130) (0xc000f534a0) Stream removed, broadcasting: 5 May 8 12:16:21.479: INFO: Found all expected endpoints: [netserver-0] May 8 12:16:21.488: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.222 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-phzr9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 12:16:21.488: INFO: >>> kubeConfig: /root/.kube/config I0508 12:16:21.522423 6 log.go:172] (0xc000afd600) (0xc000f53b80) Create stream I0508 12:16:21.522456 6 log.go:172] (0xc000afd600) (0xc000f53b80) Stream added, broadcasting: 1 I0508 12:16:21.524737 6 log.go:172] (0xc000afd600) Reply frame received for 1 I0508 12:16:21.524780 6 log.go:172] (0xc000afd600) (0xc00213ff40) Create stream I0508 12:16:21.524793 6 log.go:172] (0xc000afd600) (0xc00213ff40) Stream added, broadcasting: 3 I0508 12:16:21.526196 6 log.go:172] (0xc000afd600) Reply frame received for 3 I0508 12:16:21.526239 6 log.go:172] (0xc000afd600) (0xc000f53c20) Create stream I0508 12:16:21.526254 6 log.go:172] (0xc000afd600) (0xc000f53c20) Stream added, broadcasting: 5 I0508 12:16:21.527181 6 log.go:172] (0xc000afd600) Reply frame received for 5 I0508 12:16:22.601265 6 log.go:172] (0xc000afd600) Data frame received for 3 I0508 12:16:22.601317 6 log.go:172] (0xc00213ff40) (3) Data frame handling I0508 12:16:22.601344 6 log.go:172] (0xc00213ff40) (3) Data frame sent I0508 12:16:22.601491 6 log.go:172] (0xc000afd600) Data frame received for 5 I0508 12:16:22.601514 6 log.go:172] (0xc000f53c20) (5) Data frame handling I0508 12:16:22.601700 6 log.go:172] (0xc000afd600) Data frame received for 3 I0508 12:16:22.601715 6 log.go:172] (0xc00213ff40) (3) Data frame handling I0508 12:16:22.603459 6 log.go:172] (0xc000afd600) Data frame 
received for 1 I0508 12:16:22.603481 6 log.go:172] (0xc000f53b80) (1) Data frame handling I0508 12:16:22.603511 6 log.go:172] (0xc000f53b80) (1) Data frame sent I0508 12:16:22.603536 6 log.go:172] (0xc000afd600) (0xc000f53b80) Stream removed, broadcasting: 1 I0508 12:16:22.603647 6 log.go:172] (0xc000afd600) Go away received I0508 12:16:22.603728 6 log.go:172] (0xc000afd600) (0xc000f53b80) Stream removed, broadcasting: 1 I0508 12:16:22.603772 6 log.go:172] (0xc000afd600) (0xc00213ff40) Stream removed, broadcasting: 3 I0508 12:16:22.603801 6 log.go:172] (0xc000afd600) (0xc000f53c20) Stream removed, broadcasting: 5 May 8 12:16:22.603: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:16:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-phzr9" for this suite. May 8 12:16:46.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:16:46.691: INFO: namespace: e2e-tests-pod-network-test-phzr9, resource: bindings, ignored listing per whitelist May 8 12:16:46.710: INFO: namespace e2e-tests-pod-network-test-phzr9 deletion completed in 24.101092326s • [SLOW TEST:52.593 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:16:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 8 12:16:46.849: INFO: Waiting up to 5m0s for pod "pod-c972d36e-9125-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-fwdrc" to be "success or failure" May 8 12:16:46.854: INFO: Pod "pod-c972d36e-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587671ms May 8 12:16:48.858: INFO: Pod "pod-c972d36e-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008554221s May 8 12:16:50.862: INFO: Pod "pod-c972d36e-9125-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012435487s STEP: Saw pod success May 8 12:16:50.862: INFO: Pod "pod-c972d36e-9125-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:16:50.864: INFO: Trying to get logs from node hunter-worker2 pod pod-c972d36e-9125-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 12:16:50.886: INFO: Waiting for pod pod-c972d36e-9125-11ea-8adb-0242ac110017 to disappear May 8 12:16:50.945: INFO: Pod pod-c972d36e-9125-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:16:50.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fwdrc" for this suite. May 8 12:16:57.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:16:57.103: INFO: namespace: e2e-tests-emptydir-fwdrc, resource: bindings, ignored listing per whitelist May 8 12:16:57.197: INFO: namespace e2e-tests-emptydir-fwdrc deletion completed in 6.248188927s • [SLOW TEST:10.487 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:16:57.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 8 12:16:57.300: INFO: Waiting up to 5m0s for pod "pod-cfb583f0-9125-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-dqwx2" to be "success or failure" May 8 12:16:57.310: INFO: Pod "pod-cfb583f0-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.303347ms May 8 12:16:59.315: INFO: Pod "pod-cfb583f0-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014640384s May 8 12:17:01.319: INFO: Pod "pod-cfb583f0-9125-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018847386s STEP: Saw pod success May 8 12:17:01.319: INFO: Pod "pod-cfb583f0-9125-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:17:01.322: INFO: Trying to get logs from node hunter-worker pod pod-cfb583f0-9125-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 12:17:01.349: INFO: Waiting for pod pod-cfb583f0-9125-11ea-8adb-0242ac110017 to disappear May 8 12:17:01.361: INFO: Pod pod-cfb583f0-9125-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:17:01.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dqwx2" for this suite. 
May 8 12:17:07.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:17:07.443: INFO: namespace: e2e-tests-emptydir-dqwx2, resource: bindings, ignored listing per whitelist May 8 12:17:07.445: INFO: namespace e2e-tests-emptydir-dqwx2 deletion completed in 6.080902104s • [SLOW TEST:10.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:17:07.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 8 12:17:07.610: INFO: Waiting up to 5m0s for pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-shhnj" to be "success or failure" May 8 12:17:07.618: INFO: Pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.957094ms May 8 12:17:09.622: INFO: Pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011635755s May 8 12:17:11.626: INFO: Pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.015502335s May 8 12:17:13.859: INFO: Pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249122817s STEP: Saw pod success May 8 12:17:13.859: INFO: Pod "pod-d5dc91a4-9125-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:17:13.862: INFO: Trying to get logs from node hunter-worker pod pod-d5dc91a4-9125-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 12:17:14.005: INFO: Waiting for pod pod-d5dc91a4-9125-11ea-8adb-0242ac110017 to disappear May 8 12:17:14.011: INFO: Pod pod-d5dc91a4-9125-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:17:14.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-shhnj" for this suite. 
May 8 12:17:20.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:17:20.085: INFO: namespace: e2e-tests-emptydir-shhnj, resource: bindings, ignored listing per whitelist May 8 12:17:20.100: INFO: namespace e2e-tests-emptydir-shhnj deletion completed in 6.085279435s • [SLOW TEST:12.654 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:17:20.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 8 12:17:20.391: INFO: Waiting up to 5m0s for pod "downward-api-dd793d16-9125-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-v4hcw" to be "success or failure" May 8 12:17:20.394: INFO: Pod "downward-api-dd793d16-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.179882ms May 8 12:17:22.506: INFO: Pod "downward-api-dd793d16-9125-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.11520452s May 8 12:17:24.518: INFO: Pod "downward-api-dd793d16-9125-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127000728s STEP: Saw pod success May 8 12:17:24.518: INFO: Pod "downward-api-dd793d16-9125-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:17:24.521: INFO: Trying to get logs from node hunter-worker pod downward-api-dd793d16-9125-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 12:17:24.562: INFO: Waiting for pod downward-api-dd793d16-9125-11ea-8adb-0242ac110017 to disappear May 8 12:17:24.691: INFO: Pod downward-api-dd793d16-9125-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:17:24.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v4hcw" for this suite. May 8 12:17:30.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:17:30.743: INFO: namespace: e2e-tests-downward-api-v4hcw, resource: bindings, ignored listing per whitelist May 8 12:17:30.787: INFO: namespace e2e-tests-downward-api-v4hcw deletion completed in 6.091741326s • [SLOW TEST:10.687 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:17:30.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 8 12:17:32.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:32.053: INFO: Number of nodes with available pods: 0 May 8 12:17:32.053: INFO: Node hunter-worker is running more than one daemon pod May 8 12:17:33.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:33.060: INFO: Number of nodes with available pods: 0 May 8 12:17:33.060: INFO: Node hunter-worker is running more than one daemon pod May 8 12:17:34.711: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:34.752: INFO: Number of nodes with available pods: 0 May 8 12:17:34.752: INFO: Node hunter-worker is running more than one daemon pod May 8 12:17:35.228: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:35.231: INFO: Number of nodes with 
available pods: 0 May 8 12:17:35.231: INFO: Node hunter-worker is running more than one daemon pod May 8 12:17:36.226: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:36.229: INFO: Number of nodes with available pods: 1 May 8 12:17:36.229: INFO: Node hunter-worker2 is running more than one daemon pod May 8 12:17:37.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:37.061: INFO: Number of nodes with available pods: 2 May 8 12:17:37.061: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 8 12:17:37.090: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 12:17:37.105: INFO: Number of nodes with available pods: 2 May 8 12:17:37.105: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2jtmh, will wait for the garbage collector to delete the pods May 8 12:17:38.227: INFO: Deleting DaemonSet.extensions daemon-set took: 30.915424ms May 8 12:17:38.728: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.229404ms May 8 12:17:51.331: INFO: Number of nodes with available pods: 0 May 8 12:17:51.331: INFO: Number of running nodes: 0, number of available pods: 0 May 8 12:17:51.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2jtmh/daemonsets","resourceVersion":"9414249"},"items":null} May 8 12:17:51.337: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2jtmh/pods","resourceVersion":"9414249"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:17:51.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2jtmh" for this suite. 
May 8 12:17:57.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:17:57.491: INFO: namespace: e2e-tests-daemonsets-2jtmh, resource: bindings, ignored listing per whitelist May 8 12:17:57.507: INFO: namespace e2e-tests-daemonsets-2jtmh deletion completed in 6.156410552s • [SLOW TEST:26.720 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:17:57.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 12:18:12.156: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 12:18:12.316: INFO: Pod pod-with-poststart-http-hook still exists May 8 12:18:14.316: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 12:18:14.735: INFO: Pod pod-with-poststart-http-hook still exists May 8 12:18:16.316: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 12:18:16.378: INFO: Pod pod-with-poststart-http-hook still exists May 8 12:18:18.316: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 12:18:18.321: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:18:18.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2r6l7" for this suite. 
May 8 12:18:44.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:18:44.737: INFO: namespace: e2e-tests-container-lifecycle-hook-2r6l7, resource: bindings, ignored listing per whitelist
May 8 12:18:45.071: INFO: namespace e2e-tests-container-lifecycle-hook-2r6l7 deletion completed in 26.746663904s
• [SLOW TEST:47.564 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:18:45.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 1 pods
STEP: Gathering metrics
W0508 12:18:48.929088 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 12:18:48.929: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:18:48.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-js67p" for this suite.
May 8 12:18:54.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:18:55.024: INFO: namespace: e2e-tests-gc-js67p, resource: bindings, ignored listing per whitelist
May 8 12:18:55.034: INFO: namespace e2e-tests-gc-js67p deletion completed in 6.10213851s
• [SLOW TEST:9.963 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:18:55.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 8 12:18:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:18:55.641: INFO: stderr: ""
May 8 12:18:55.641: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 8 12:18:55.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:18:55.822: INFO: stderr: ""
May 8 12:18:55.823: INFO: stdout: "update-demo-nautilus-7xwhm update-demo-nautilus-xrm7b "
May 8 12:18:55.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xwhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:18:56.023: INFO: stderr: ""
May 8 12:18:56.023: INFO: stdout: ""
May 8 12:18:56.023: INFO: update-demo-nautilus-7xwhm is created but not running
May 8 12:19:01.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:01.203: INFO: stderr: ""
May 8 12:19:01.203: INFO: stdout: "update-demo-nautilus-7xwhm update-demo-nautilus-xrm7b "
May 8 12:19:01.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xwhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:01.879: INFO: stderr: ""
May 8 12:19:01.879: INFO: stdout: "true"
May 8 12:19:01.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xwhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:02.083: INFO: stderr: ""
May 8 12:19:02.083: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 8 12:19:02.083: INFO: validating pod update-demo-nautilus-7xwhm
May 8 12:19:02.088: INFO: got data: { "image": "nautilus.jpg" }
May 8 12:19:02.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 8 12:19:02.088: INFO: update-demo-nautilus-7xwhm is verified up and running
May 8 12:19:02.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrm7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:02.177: INFO: stderr: ""
May 8 12:19:02.177: INFO: stdout: ""
May 8 12:19:02.177: INFO: update-demo-nautilus-xrm7b is created but not running
May 8 12:19:07.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.284: INFO: stderr: ""
May 8 12:19:07.284: INFO: stdout: "update-demo-nautilus-7xwhm update-demo-nautilus-xrm7b "
May 8 12:19:07.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xwhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.383: INFO: stderr: ""
May 8 12:19:07.383: INFO: stdout: "true"
May 8 12:19:07.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7xwhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.470: INFO: stderr: ""
May 8 12:19:07.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 8 12:19:07.471: INFO: validating pod update-demo-nautilus-7xwhm
May 8 12:19:07.473: INFO: got data: { "image": "nautilus.jpg" }
May 8 12:19:07.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 8 12:19:07.473: INFO: update-demo-nautilus-7xwhm is verified up and running
May 8 12:19:07.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrm7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.568: INFO: stderr: ""
May 8 12:19:07.568: INFO: stdout: "true"
May 8 12:19:07.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrm7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.648: INFO: stderr: ""
May 8 12:19:07.648: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 8 12:19:07.648: INFO: validating pod update-demo-nautilus-xrm7b
May 8 12:19:07.652: INFO: got data: { "image": "nautilus.jpg" }
May 8 12:19:07.652: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 8 12:19:07.652: INFO: update-demo-nautilus-xrm7b is verified up and running
STEP: using delete to clean up resources
May 8 12:19:07.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.766: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 8 12:19:07.766: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 8 12:19:07.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4p4j4'
May 8 12:19:07.967: INFO: stderr: "No resources found.\n"
May 8 12:19:07.967: INFO: stdout: ""
May 8 12:19:07.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4p4j4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 8 12:19:08.214: INFO: stderr: ""
May 8 12:19:08.214: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:19:08.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4p4j4" for this suite.
May 8 12:19:30.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:19:30.534: INFO: namespace: e2e-tests-kubectl-4p4j4, resource: bindings, ignored listing per whitelist
May 8 12:19:30.565: INFO: namespace e2e-tests-kubectl-4p4j4 deletion completed in 22.347088985s
• [SLOW TEST:35.531 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:19:30.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 8 12:19:34.753: INFO: Waiting up to 5m0s for pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-pods-r8s7z" to be "success or failure"
May 8 12:19:34.849: INFO: Pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 95.49315ms
May 8 12:19:36.853: INFO: Pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099866054s
May 8 12:19:38.857: INFO: Pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.103969346s
May 8 12:19:40.860: INFO: Pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10719177s
STEP: Saw pod success
May 8 12:19:40.861: INFO: Pod "client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:19:40.863: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017 container env3cont:
STEP: delete the pod
May 8 12:19:40.943: INFO: Waiting for pod client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017 to disappear
May 8 12:19:40.967: INFO: Pod client-envvars-2d8e600b-9126-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:19:40.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r8s7z" for this suite.
May 8 12:20:20.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:20:21.019: INFO: namespace: e2e-tests-pods-r8s7z, resource: bindings, ignored listing per whitelist
May 8 12:20:21.047: INFO: namespace e2e-tests-pods-r8s7z deletion completed in 40.075384265s
• [SLOW TEST:50.481 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:20:21.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 8 12:20:34.496: INFO: 5 pods remaining
May 8 12:20:34.496: INFO: 5 pods has nil DeletionTimestamp
May 8 12:20:34.496: INFO:
STEP: Gathering metrics
W0508 12:20:39.440893 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 12:20:39.440: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:20:39.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-n9qfw" for this suite.
May 8 12:20:51.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:20:51.464: INFO: namespace: e2e-tests-gc-n9qfw, resource: bindings, ignored listing per whitelist
May 8 12:20:51.531: INFO: namespace e2e-tests-gc-n9qfw deletion completed in 12.08788536s
• [SLOW TEST:30.484 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:20:51.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:20:51.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jd78d" for this suite.
May 8 12:20:57.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:20:58.002: INFO: namespace: e2e-tests-services-jd78d, resource: bindings, ignored listing per whitelist
May 8 12:20:58.064: INFO: namespace e2e-tests-services-jd78d deletion completed in 6.11660824s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.533 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:20:58.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 8 12:20:58.207: INFO: Waiting up to 5m0s for pod "downward-api-5f4c441b-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-downward-api-znpbj" to be "success or failure"
May 8 12:20:58.287: INFO: Pod "downward-api-5f4c441b-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 80.646853ms
May 8 12:21:00.292: INFO: Pod "downward-api-5f4c441b-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085330748s
May 8 12:21:02.296: INFO: Pod "downward-api-5f4c441b-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089688476s
STEP: Saw pod success
May 8 12:21:02.297: INFO: Pod "downward-api-5f4c441b-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:21:02.300: INFO: Trying to get logs from node hunter-worker2 pod downward-api-5f4c441b-9126-11ea-8adb-0242ac110017 container dapi-container:
STEP: delete the pod
May 8 12:21:02.320: INFO: Waiting for pod downward-api-5f4c441b-9126-11ea-8adb-0242ac110017 to disappear
May 8 12:21:02.324: INFO: Pod downward-api-5f4c441b-9126-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:21:02.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-znpbj" for this suite.
May 8 12:21:08.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:21:08.409: INFO: namespace: e2e-tests-downward-api-znpbj, resource: bindings, ignored listing per whitelist
May 8 12:21:08.423: INFO: namespace e2e-tests-downward-api-znpbj deletion completed in 6.095251712s
• [SLOW TEST:10.359 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:21:08.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 8 12:21:12.632: INFO: Pod pod-hostip-6576a66e-9126-11ea-8adb-0242ac110017 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:21:12.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6hgp9" for this suite.
May 8 12:21:34.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:21:34.719: INFO: namespace: e2e-tests-pods-6hgp9, resource: bindings, ignored listing per whitelist
May 8 12:21:34.750: INFO: namespace e2e-tests-pods-6hgp9 deletion completed in 22.11371299s
• [SLOW TEST:26.327 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:21:34.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-75240eb1-9126-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume secrets
May 8 12:21:34.846: INFO: Waiting up to 5m0s for pod "pod-secrets-752564a3-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-bl7bq" to be "success or failure"
May 8 12:21:34.851: INFO: Pod "pod-secrets-752564a3-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075356ms
May 8 12:21:36.916: INFO: Pod "pod-secrets-752564a3-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069811748s
May 8 12:21:38.920: INFO: Pod "pod-secrets-752564a3-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07350226s
STEP: Saw pod success
May 8 12:21:38.920: INFO: Pod "pod-secrets-752564a3-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:21:38.924: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-752564a3-9126-11ea-8adb-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 8 12:21:38.941: INFO: Waiting for pod pod-secrets-752564a3-9126-11ea-8adb-0242ac110017 to disappear
May 8 12:21:38.946: INFO: Pod pod-secrets-752564a3-9126-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:21:38.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bl7bq" for this suite.
May 8 12:21:44.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:21:44.999: INFO: namespace: e2e-tests-secrets-bl7bq, resource: bindings, ignored listing per whitelist
May 8 12:21:45.047: INFO: namespace e2e-tests-secrets-bl7bq deletion completed in 6.098583417s
• [SLOW TEST:10.297 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:21:45.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-c2k9v
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-c2k9v
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-c2k9v
May 8 12:21:45.149: INFO: Found 0 stateful pods, waiting for 1
May 8 12:21:55.154: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 8 12:21:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 8 12:21:55.412: INFO: stderr: "I0508 12:21:55.282217 3135 log.go:172] (0xc00014c790) (0xc0005b7360) Create stream\nI0508 12:21:55.282277 3135 log.go:172] (0xc00014c790) (0xc0005b7360) Stream added, broadcasting: 1\nI0508 12:21:55.284790 3135 log.go:172] (0xc00014c790) Reply frame received for 1\nI0508 12:21:55.284841 3135 log.go:172] (0xc00014c790) (0xc00071e000) Create stream\nI0508 12:21:55.284857 3135 log.go:172] (0xc00014c790) (0xc00071e000) Stream added, broadcasting: 3\nI0508 12:21:55.286173 3135 log.go:172] (0xc00014c790) Reply frame received for 3\nI0508 12:21:55.286226 3135 log.go:172] (0xc00014c790) (0xc000352000) Create stream\nI0508 12:21:55.286242 3135 log.go:172] (0xc00014c790) (0xc000352000) Stream added, broadcasting: 5\nI0508 12:21:55.287187 3135 log.go:172] (0xc00014c790) Reply frame received for 5\nI0508 12:21:55.403694 3135 log.go:172] (0xc00014c790) Data frame received for 3\nI0508 12:21:55.403727 3135 log.go:172] (0xc00071e000) (3) Data frame handling\nI0508 12:21:55.403748 3135 log.go:172] (0xc00071e000) (3) Data frame sent\nI0508 12:21:55.403998 3135 log.go:172] (0xc00014c790) Data frame received for 5\nI0508 12:21:55.404055 3135 log.go:172] (0xc000352000) (5) Data frame handling\nI0508 12:21:55.404104 3135 log.go:172] (0xc00014c790) Data frame received for 3\nI0508 12:21:55.404124 3135 log.go:172] (0xc00071e000) (3) Data frame handling\nI0508 12:21:55.406372 3135 log.go:172] (0xc00014c790) Data frame received for 1\nI0508 12:21:55.406411 3135 log.go:172] (0xc0005b7360) (1) Data frame handling\nI0508 12:21:55.406436 3135 log.go:172] (0xc0005b7360) (1) Data frame sent\nI0508 12:21:55.406470 3135 log.go:172] (0xc00014c790) (0xc0005b7360) Stream removed, broadcasting: 1\nI0508 12:21:55.406750 3135 log.go:172] (0xc00014c790) (0xc0005b7360) Stream removed, broadcasting: 1\nI0508 12:21:55.406778 3135 log.go:172] (0xc00014c790) (0xc00071e000) Stream removed, broadcasting: 3\nI0508 12:21:55.406793 3135 log.go:172] (0xc00014c790) (0xc000352000) Stream removed, broadcasting: 5\n"
May 8 12:21:55.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 8 12:21:55.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 8 12:21:55.417: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 8 12:22:05.421: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 8 12:22:05.421: INFO: Waiting for statefulset status.replicas updated to 0
May 8 12:22:05.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999479s
May 8 12:22:06.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996864355s
May 8 12:22:07.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981456233s
May 8 12:22:08.458: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977519683s
May 8 12:22:09.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.97315173s
May 8 12:22:10.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.96930418s
May 8 12:22:11.472: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964049514s
May 8 12:22:12.477: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.959443861s
May 8 12:22:13.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954516441s
May 8 12:22:14.509: INFO: Verifying statefulset ss doesn't scale past 1 for another 927.367913ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-c2k9v
May 8 12:22:15.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 12:22:15.707: INFO: stderr: "I0508 12:22:15.636668 3158 log.go:172] (0xc0006c6370) (0xc0006f0780) Create stream\nI0508 12:22:15.636713 3158 log.go:172] (0xc0006c6370) (0xc0006f0780) Stream added, broadcasting: 1\nI0508 12:22:15.638720 3158 log.go:172] (0xc0006c6370) Reply frame received for 1\nI0508 12:22:15.638759 3158 log.go:172] (0xc0006c6370) (0xc0006f0820) Create stream\nI0508 12:22:15.638771 3158 log.go:172] (0xc0006c6370) (0xc0006f0820) Stream added, broadcasting: 3\nI0508 12:22:15.639551 3158 log.go:172] (0xc0006c6370) Reply frame received for 3\nI0508 12:22:15.639571 3158 log.go:172] (0xc0006c6370) (0xc0006f08c0) Create stream\nI0508 12:22:15.639577 3158 log.go:172] (0xc0006c6370) (0xc0006f08c0) Stream added, broadcasting: 5\nI0508 12:22:15.640319 3158 log.go:172] (0xc0006c6370) Reply frame received for 5\nI0508 12:22:15.700126 3158 log.go:172] (0xc0006c6370) Data frame received for 3\nI0508 12:22:15.700249 3158 log.go:172] (0xc0006f0820) (3) Data frame handling\nI0508 12:22:15.700277 3158 log.go:172] (0xc0006f0820) (3) Data frame sent\nI0508 12:22:15.700340 3158 log.go:172] (0xc0006c6370) Data frame received for 3\nI0508 12:22:15.700370 3158 log.go:172] (0xc0006f0820) (3) Data frame handling\nI0508 12:22:15.700797 3158 log.go:172] (0xc0006c6370) Data frame received for 5\nI0508 12:22:15.700816 3158 log.go:172] (0xc0006f08c0) (5) Data frame handling\nI0508 12:22:15.702290 3158 log.go:172] (0xc0006c6370)
Data frame received for 1\nI0508 12:22:15.702366 3158 log.go:172] (0xc0006f0780) (1) Data frame handling\nI0508 12:22:15.702424 3158 log.go:172] (0xc0006f0780) (1) Data frame sent\nI0508 12:22:15.702450 3158 log.go:172] (0xc0006c6370) (0xc0006f0780) Stream removed, broadcasting: 1\nI0508 12:22:15.702468 3158 log.go:172] (0xc0006c6370) Go away received\nI0508 12:22:15.702838 3158 log.go:172] (0xc0006c6370) (0xc0006f0780) Stream removed, broadcasting: 1\nI0508 12:22:15.702864 3158 log.go:172] (0xc0006c6370) (0xc0006f0820) Stream removed, broadcasting: 3\nI0508 12:22:15.702880 3158 log.go:172] (0xc0006c6370) (0xc0006f08c0) Stream removed, broadcasting: 5\n" May 8 12:22:15.707: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 12:22:15.707: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 12:22:15.710: INFO: Found 1 stateful pods, waiting for 3 May 8 12:22:25.715: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 12:22:25.715: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 12:22:25.715: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 8 12:22:25.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 12:22:25.927: INFO: stderr: "I0508 12:22:25.854827 3181 log.go:172] (0xc0007ae2c0) (0xc0003954a0) Create stream\nI0508 12:22:25.854876 3181 log.go:172] (0xc0007ae2c0) (0xc0003954a0) Stream added, broadcasting: 1\nI0508 12:22:25.857329 3181 log.go:172] (0xc0007ae2c0) Reply frame received for 1\nI0508 12:22:25.857371 3181 log.go:172] (0xc0007ae2c0) (0xc0005f6000) 
Create stream\nI0508 12:22:25.857385 3181 log.go:172] (0xc0007ae2c0) (0xc0005f6000) Stream added, broadcasting: 3\nI0508 12:22:25.858484 3181 log.go:172] (0xc0007ae2c0) Reply frame received for 3\nI0508 12:22:25.858516 3181 log.go:172] (0xc0007ae2c0) (0xc0006c6000) Create stream\nI0508 12:22:25.858529 3181 log.go:172] (0xc0007ae2c0) (0xc0006c6000) Stream added, broadcasting: 5\nI0508 12:22:25.859203 3181 log.go:172] (0xc0007ae2c0) Reply frame received for 5\nI0508 12:22:25.919554 3181 log.go:172] (0xc0007ae2c0) Data frame received for 5\nI0508 12:22:25.919603 3181 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0508 12:22:25.919664 3181 log.go:172] (0xc0007ae2c0) Data frame received for 3\nI0508 12:22:25.919701 3181 log.go:172] (0xc0005f6000) (3) Data frame handling\nI0508 12:22:25.919723 3181 log.go:172] (0xc0005f6000) (3) Data frame sent\nI0508 12:22:25.919736 3181 log.go:172] (0xc0007ae2c0) Data frame received for 3\nI0508 12:22:25.919749 3181 log.go:172] (0xc0005f6000) (3) Data frame handling\nI0508 12:22:25.921418 3181 log.go:172] (0xc0007ae2c0) Data frame received for 1\nI0508 12:22:25.921437 3181 log.go:172] (0xc0003954a0) (1) Data frame handling\nI0508 12:22:25.921451 3181 log.go:172] (0xc0003954a0) (1) Data frame sent\nI0508 12:22:25.921462 3181 log.go:172] (0xc0007ae2c0) (0xc0003954a0) Stream removed, broadcasting: 1\nI0508 12:22:25.921473 3181 log.go:172] (0xc0007ae2c0) Go away received\nI0508 12:22:25.921801 3181 log.go:172] (0xc0007ae2c0) (0xc0003954a0) Stream removed, broadcasting: 1\nI0508 12:22:25.921832 3181 log.go:172] (0xc0007ae2c0) (0xc0005f6000) Stream removed, broadcasting: 3\nI0508 12:22:25.921843 3181 log.go:172] (0xc0007ae2c0) (0xc0006c6000) Stream removed, broadcasting: 5\n" May 8 12:22:25.927: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 12:22:25.927: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 
12:22:25.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 12:22:26.158: INFO: stderr: "I0508 12:22:26.052644 3203 log.go:172] (0xc0007ce210) (0xc00071c5a0) Create stream\nI0508 12:22:26.052737 3203 log.go:172] (0xc0007ce210) (0xc00071c5a0) Stream added, broadcasting: 1\nI0508 12:22:26.056137 3203 log.go:172] (0xc0007ce210) Reply frame received for 1\nI0508 12:22:26.056216 3203 log.go:172] (0xc0007ce210) (0xc0005b4dc0) Create stream\nI0508 12:22:26.056236 3203 log.go:172] (0xc0007ce210) (0xc0005b4dc0) Stream added, broadcasting: 3\nI0508 12:22:26.057476 3203 log.go:172] (0xc0007ce210) Reply frame received for 3\nI0508 12:22:26.057526 3203 log.go:172] (0xc0007ce210) (0xc0004aa000) Create stream\nI0508 12:22:26.057543 3203 log.go:172] (0xc0007ce210) (0xc0004aa000) Stream added, broadcasting: 5\nI0508 12:22:26.058581 3203 log.go:172] (0xc0007ce210) Reply frame received for 5\nI0508 12:22:26.152783 3203 log.go:172] (0xc0007ce210) Data frame received for 5\nI0508 12:22:26.152829 3203 log.go:172] (0xc0004aa000) (5) Data frame handling\nI0508 12:22:26.152857 3203 log.go:172] (0xc0007ce210) Data frame received for 3\nI0508 12:22:26.152868 3203 log.go:172] (0xc0005b4dc0) (3) Data frame handling\nI0508 12:22:26.152882 3203 log.go:172] (0xc0005b4dc0) (3) Data frame sent\nI0508 12:22:26.152893 3203 log.go:172] (0xc0007ce210) Data frame received for 3\nI0508 12:22:26.153010 3203 log.go:172] (0xc0005b4dc0) (3) Data frame handling\nI0508 12:22:26.155028 3203 log.go:172] (0xc0007ce210) Data frame received for 1\nI0508 12:22:26.155087 3203 log.go:172] (0xc00071c5a0) (1) Data frame handling\nI0508 12:22:26.155112 3203 log.go:172] (0xc00071c5a0) (1) Data frame sent\nI0508 12:22:26.155124 3203 log.go:172] (0xc0007ce210) (0xc00071c5a0) Stream removed, broadcasting: 1\nI0508 12:22:26.155150 3203 log.go:172] (0xc0007ce210) Go away 
received\nI0508 12:22:26.155360 3203 log.go:172] (0xc0007ce210) (0xc00071c5a0) Stream removed, broadcasting: 1\nI0508 12:22:26.155390 3203 log.go:172] (0xc0007ce210) (0xc0005b4dc0) Stream removed, broadcasting: 3\nI0508 12:22:26.155400 3203 log.go:172] (0xc0007ce210) (0xc0004aa000) Stream removed, broadcasting: 5\n" May 8 12:22:26.158: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 12:22:26.158: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 12:22:26.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 12:22:26.378: INFO: stderr: "I0508 12:22:26.281391 3225 log.go:172] (0xc0008422c0) (0xc000716640) Create stream\nI0508 12:22:26.281452 3225 log.go:172] (0xc0008422c0) (0xc000716640) Stream added, broadcasting: 1\nI0508 12:22:26.283783 3225 log.go:172] (0xc0008422c0) Reply frame received for 1\nI0508 12:22:26.283816 3225 log.go:172] (0xc0008422c0) (0xc0005d2c80) Create stream\nI0508 12:22:26.283827 3225 log.go:172] (0xc0008422c0) (0xc0005d2c80) Stream added, broadcasting: 3\nI0508 12:22:26.284536 3225 log.go:172] (0xc0008422c0) Reply frame received for 3\nI0508 12:22:26.284567 3225 log.go:172] (0xc0008422c0) (0xc000524000) Create stream\nI0508 12:22:26.284580 3225 log.go:172] (0xc0008422c0) (0xc000524000) Stream added, broadcasting: 5\nI0508 12:22:26.285595 3225 log.go:172] (0xc0008422c0) Reply frame received for 5\nI0508 12:22:26.371643 3225 log.go:172] (0xc0008422c0) Data frame received for 3\nI0508 12:22:26.371668 3225 log.go:172] (0xc0005d2c80) (3) Data frame handling\nI0508 12:22:26.371675 3225 log.go:172] (0xc0005d2c80) (3) Data frame sent\nI0508 12:22:26.371783 3225 log.go:172] (0xc0008422c0) Data frame received for 3\nI0508 12:22:26.371806 3225 log.go:172] (0xc0005d2c80) (3) Data frame 
handling\nI0508 12:22:26.372240 3225 log.go:172] (0xc0008422c0) Data frame received for 5\nI0508 12:22:26.372283 3225 log.go:172] (0xc000524000) (5) Data frame handling\nI0508 12:22:26.374128 3225 log.go:172] (0xc0008422c0) Data frame received for 1\nI0508 12:22:26.374168 3225 log.go:172] (0xc000716640) (1) Data frame handling\nI0508 12:22:26.374199 3225 log.go:172] (0xc000716640) (1) Data frame sent\nI0508 12:22:26.374223 3225 log.go:172] (0xc0008422c0) (0xc000716640) Stream removed, broadcasting: 1\nI0508 12:22:26.374257 3225 log.go:172] (0xc0008422c0) Go away received\nI0508 12:22:26.374437 3225 log.go:172] (0xc0008422c0) (0xc000716640) Stream removed, broadcasting: 1\nI0508 12:22:26.374461 3225 log.go:172] (0xc0008422c0) (0xc0005d2c80) Stream removed, broadcasting: 3\nI0508 12:22:26.374473 3225 log.go:172] (0xc0008422c0) (0xc000524000) Stream removed, broadcasting: 5\n" May 8 12:22:26.378: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 12:22:26.378: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 12:22:26.378: INFO: Waiting for statefulset status.replicas updated to 0 May 8 12:22:26.409: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 8 12:22:36.417: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 12:22:36.417: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 12:22:36.417: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 12:22:36.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999473s May 8 12:22:37.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989678737s May 8 12:22:38.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984889185s May 8 12:22:39.448: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 6.97988153s May 8 12:22:40.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975925553s May 8 12:22:41.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970396239s May 8 12:22:42.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965510433s May 8 12:22:43.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960927785s May 8 12:22:44.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.948748241s May 8 12:22:45.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 943.30392ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-c2k9v May 8 12:22:46.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 12:22:46.715: INFO: stderr: "I0508 12:22:46.624861 3248 log.go:172] (0xc000138580) (0xc0005da640) Create stream\nI0508 12:22:46.624927 3248 log.go:172] (0xc000138580) (0xc0005da640) Stream added, broadcasting: 1\nI0508 12:22:46.628152 3248 log.go:172] (0xc000138580) Reply frame received for 1\nI0508 12:22:46.628195 3248 log.go:172] (0xc000138580) (0xc0005da6e0) Create stream\nI0508 12:22:46.628207 3248 log.go:172] (0xc000138580) (0xc0005da6e0) Stream added, broadcasting: 3\nI0508 12:22:46.629322 3248 log.go:172] (0xc000138580) Reply frame received for 3\nI0508 12:22:46.629360 3248 log.go:172] (0xc000138580) (0xc0005da780) Create stream\nI0508 12:22:46.629372 3248 log.go:172] (0xc000138580) (0xc0005da780) Stream added, broadcasting: 5\nI0508 12:22:46.630359 3248 log.go:172] (0xc000138580) Reply frame received for 5\nI0508 12:22:46.709546 3248 log.go:172] (0xc000138580) Data frame received for 5\nI0508 12:22:46.709576 3248 log.go:172] (0xc0005da780) (5) Data frame handling\nI0508 12:22:46.709593 3248 log.go:172] (0xc000138580) 
Data frame received for 3\nI0508 12:22:46.709598 3248 log.go:172] (0xc0005da6e0) (3) Data frame handling\nI0508 12:22:46.709607 3248 log.go:172] (0xc0005da6e0) (3) Data frame sent\nI0508 12:22:46.709613 3248 log.go:172] (0xc000138580) Data frame received for 3\nI0508 12:22:46.709619 3248 log.go:172] (0xc0005da6e0) (3) Data frame handling\nI0508 12:22:46.711025 3248 log.go:172] (0xc000138580) Data frame received for 1\nI0508 12:22:46.711040 3248 log.go:172] (0xc0005da640) (1) Data frame handling\nI0508 12:22:46.711049 3248 log.go:172] (0xc0005da640) (1) Data frame sent\nI0508 12:22:46.711059 3248 log.go:172] (0xc000138580) (0xc0005da640) Stream removed, broadcasting: 1\nI0508 12:22:46.711143 3248 log.go:172] (0xc000138580) Go away received\nI0508 12:22:46.711227 3248 log.go:172] (0xc000138580) (0xc0005da640) Stream removed, broadcasting: 1\nI0508 12:22:46.711271 3248 log.go:172] (0xc000138580) (0xc0005da6e0) Stream removed, broadcasting: 3\nI0508 12:22:46.711305 3248 log.go:172] (0xc000138580) (0xc0005da780) Stream removed, broadcasting: 5\n" May 8 12:22:46.716: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 12:22:46.716: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 12:22:46.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c2k9v ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 12:22:46.919: INFO: stderr: "I0508 12:22:46.850373 3271 log.go:172] (0xc000138840) (0xc0006a9220) Create stream\nI0508 12:22:46.850437 3271 log.go:172] (0xc000138840) (0xc0006a9220) Stream added, broadcasting: 1\nI0508 12:22:46.853031 3271 log.go:172] (0xc000138840) Reply frame received for 1\nI0508 12:22:46.853097 3271 log.go:172] (0xc000138840) (0xc000770000) Create stream\nI0508 12:22:46.853272 3271 log.go:172] (0xc000138840) (0xc000770000) Stream added, broadcasting: 
3\nI0508 12:22:46.854170 3271 log.go:172] (0xc000138840) Reply frame received for 3\nI0508 12:22:46.854222 3271 log.go:172] (0xc000138840) (0xc0005b0000) Create stream\nI0508 12:22:46.854235 3271 log.go:172] (0xc000138840) (0xc0005b0000) Stream added, broadcasting: 5\nI0508 12:22:46.855332 3271 log.go:172] (0xc000138840) Reply frame received for 5\nI0508 12:22:46.912770 3271 log.go:172] (0xc000138840) Data frame received for 5\nI0508 12:22:46.912820 3271 log.go:172] (0xc0005b0000) (5) Data frame handling\nI0508 12:22:46.912846 3271 log.go:172] (0xc000138840) Data frame received for 3\nI0508 12:22:46.912855 3271 log.go:172] (0xc000770000) (3) Data frame handling\nI0508 12:22:46.912868 3271 log.go:172] (0xc000770000) (3) Data frame sent\nI0508 12:22:46.912881 3271 log.go:172] (0xc000138840) Data frame received for 3\nI0508 12:22:46.912891 3271 log.go:172] (0xc000770000) (3) Data frame handling\nI0508 12:22:46.914664 3271 log.go:172] (0xc000138840) Data frame received for 1\nI0508 12:22:46.914690 3271 log.go:172] (0xc0006a9220) (1) Data frame handling\nI0508 12:22:46.914704 3271 log.go:172] (0xc0006a9220) (1) Data frame sent\nI0508 12:22:46.914729 3271 log.go:172] (0xc000138840) (0xc0006a9220) Stream removed, broadcasting: 1\nI0508 12:22:46.914751 3271 log.go:172] (0xc000138840) Go away received\nI0508 12:22:46.915066 3271 log.go:172] (0xc000138840) (0xc0006a9220) Stream removed, broadcasting: 1\nI0508 12:22:46.915101 3271 log.go:172] (0xc000138840) (0xc000770000) Stream removed, broadcasting: 3\nI0508 12:22:46.915131 3271 log.go:172] (0xc000138840) (0xc0005b0000) Stream removed, broadcasting: 5\n" May 8 12:22:46.919: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 12:22:46.919: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 12:22:46.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-c2k9v ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 12:22:47.132: INFO: stderr: "I0508 12:22:47.054048 3295 log.go:172] (0xc00055c2c0) (0xc00081d860) Create stream\nI0508 12:22:47.054126 3295 log.go:172] (0xc00055c2c0) (0xc00081d860) Stream added, broadcasting: 1\nI0508 12:22:47.058133 3295 log.go:172] (0xc00055c2c0) Reply frame received for 1\nI0508 12:22:47.058193 3295 log.go:172] (0xc00055c2c0) (0xc000332820) Create stream\nI0508 12:22:47.058219 3295 log.go:172] (0xc00055c2c0) (0xc000332820) Stream added, broadcasting: 3\nI0508 12:22:47.059299 3295 log.go:172] (0xc00055c2c0) Reply frame received for 3\nI0508 12:22:47.059331 3295 log.go:172] (0xc00055c2c0) (0xc00081d040) Create stream\nI0508 12:22:47.059339 3295 log.go:172] (0xc00055c2c0) (0xc00081d040) Stream added, broadcasting: 5\nI0508 12:22:47.060132 3295 log.go:172] (0xc00055c2c0) Reply frame received for 5\nI0508 12:22:47.127559 3295 log.go:172] (0xc00055c2c0) Data frame received for 5\nI0508 12:22:47.127595 3295 log.go:172] (0xc00081d040) (5) Data frame handling\nI0508 12:22:47.127619 3295 log.go:172] (0xc00055c2c0) Data frame received for 3\nI0508 12:22:47.127628 3295 log.go:172] (0xc000332820) (3) Data frame handling\nI0508 12:22:47.127638 3295 log.go:172] (0xc000332820) (3) Data frame sent\nI0508 12:22:47.127653 3295 log.go:172] (0xc00055c2c0) Data frame received for 3\nI0508 12:22:47.127661 3295 log.go:172] (0xc000332820) (3) Data frame handling\nI0508 12:22:47.128426 3295 log.go:172] (0xc00055c2c0) Data frame received for 1\nI0508 12:22:47.128439 3295 log.go:172] (0xc00081d860) (1) Data frame handling\nI0508 12:22:47.128445 3295 log.go:172] (0xc00081d860) (1) Data frame sent\nI0508 12:22:47.128452 3295 log.go:172] (0xc00055c2c0) (0xc00081d860) Stream removed, broadcasting: 1\nI0508 12:22:47.128524 3295 log.go:172] (0xc00055c2c0) Go away received\nI0508 12:22:47.128599 3295 log.go:172] (0xc00055c2c0) (0xc00081d860) Stream removed, 
broadcasting: 1\nI0508 12:22:47.128615 3295 log.go:172] (0xc00055c2c0) (0xc000332820) Stream removed, broadcasting: 3\nI0508 12:22:47.128625 3295 log.go:172] (0xc00055c2c0) (0xc00081d040) Stream removed, broadcasting: 5\n" May 8 12:22:47.132: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 12:22:47.132: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 12:22:47.132: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 8 12:23:17.146: INFO: Deleting all statefulset in ns e2e-tests-statefulset-c2k9v May 8 12:23:17.150: INFO: Scaling statefulset ss to 0 May 8 12:23:17.159: INFO: Waiting for statefulset status.replicas updated to 0 May 8 12:23:17.162: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:23:17.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-c2k9v" for this suite. 
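The repeated "Verifying statefulset ss doesn't scale past N for another ...s" lines above come from a hold-for-a-window check: while a pod is unhealthy, the replica count is re-asserted roughly once per second until a ~10s window expires, and the test fails the moment the StatefulSet scales past the bound. A minimal sketch of that pattern (hypothetical Python stand-in; the real e2e suite is Go, and `predicate` here stands in for the replica-count check):

```python
import time

def assert_holds_for(predicate, window, interval=1.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Assert that `predicate()` stays truthy for `window` seconds,
    re-checking every `interval` seconds.

    Success means the condition never broke during the window --
    mirroring the "doesn't scale past N for another ...s" loop.
    `clock` and `sleep` are injectable so the loop can be tested
    without real delays.
    """
    deadline = clock() + window
    while clock() < deadline:
        remaining = deadline - clock()
        if not predicate():
            raise AssertionError(
                f"condition broke with {remaining:.1f}s left in window")
        sleep(min(interval, max(remaining, 0.0)))
```

With a fake clock, `assert_holds_for(lambda: replicas() <= 3, window=10)` would reproduce the countdown seen in the log.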
May 8 12:23:23.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:23:23.261: INFO: namespace: e2e-tests-statefulset-c2k9v, resource: bindings, ignored listing per whitelist May 8 12:23:23.321: INFO: namespace e2e-tests-statefulset-c2k9v deletion completed in 6.121232672s • [SLOW TEST:98.274 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:23:23.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-b5ddf302-9126-11ea-8adb-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-b5ddf351-9126-11ea-8adb-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b5ddf302-9126-11ea-8adb-0242ac110017 STEP: Updating configmap cm-test-opt-upd-b5ddf351-9126-11ea-8adb-0242ac110017 
STEP: Creating configMap with name cm-test-opt-create-b5ddf36d-9126-11ea-8adb-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:23:31.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-n2l2j" for this suite. May 8 12:23:53.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:23:53.643: INFO: namespace: e2e-tests-configmap-n2l2j, resource: bindings, ignored listing per whitelist May 8 12:23:53.674: INFO: namespace e2e-tests-configmap-n2l2j deletion completed in 22.141848765s • [SLOW TEST:30.353 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:23:53.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] 
EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:23:57.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-75lm9" for this suite. May 8 12:24:03.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:03.928: INFO: namespace: e2e-tests-emptydir-wrapper-75lm9, resource: bindings, ignored listing per whitelist May 8 12:24:03.953: INFO: namespace e2e-tests-emptydir-wrapper-75lm9 deletion completed in 6.121856756s • [SLOW TEST:10.278 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:03.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 12:24:04.076: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-4n6j5" to be "success or failure" May 8 12:24:04.082: INFO: Pod "downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.41609ms May 8 12:24:06.086: INFO: Pod "downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009845956s May 8 12:24:08.092: INFO: Pod "downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015691273s STEP: Saw pod success May 8 12:24:08.092: INFO: Pod "downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:24:08.094: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 12:24:08.123: INFO: Waiting for pod downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017 to disappear May 8 12:24:08.136: INFO: Pod downwardapi-volume-ce148c6a-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:24:08.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4n6j5" for this suite. 
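The 'Waiting up to 5m0s for pod ... to be "success or failure"' sequence above is the complementary polling pattern: check a condition repeatedly until it becomes true or a deadline passes. A sketch under the same assumptions (hypothetical Python stand-in for the Go framework helper; `condition` would wrap a pod-phase lookup):

```python
import time

def wait_for(condition, timeout, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition()` until it returns truthy or `timeout` seconds elapse.

    Returns True as soon as the condition holds (e.g. the pod phase
    reached "Succeeded"), or False if the deadline passes first --
    the shape of the success-or-failure wait in the log above.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return bool(condition())  # one final check at the deadline
```

The Elapsed values in the log (5.41609ms, 2.009845956s, 4.015691273s) are consistent with such a loop polling on a ~2s interval.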
May 8 12:24:14.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:14.258: INFO: namespace: e2e-tests-projected-4n6j5, resource: bindings, ignored listing per whitelist May 8 12:24:14.268: INFO: namespace e2e-tests-projected-4n6j5 deletion completed in 6.128112039s • [SLOW TEST:10.314 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:14.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 8 12:24:14.396: INFO: Waiting up to 5m0s for pod "pod-d43d4c18-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-96v7q" to be "success or failure" May 8 12:24:14.399: INFO: Pod "pod-d43d4c18-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780433ms May 8 12:24:16.404: INFO: Pod "pod-d43d4c18-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008288548s May 8 12:24:18.408: INFO: Pod "pod-d43d4c18-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012449906s STEP: Saw pod success May 8 12:24:18.408: INFO: Pod "pod-d43d4c18-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:24:18.411: INFO: Trying to get logs from node hunter-worker2 pod pod-d43d4c18-9126-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 12:24:18.468: INFO: Waiting for pod pod-d43d4c18-9126-11ea-8adb-0242ac110017 to disappear May 8 12:24:18.511: INFO: Pod pod-d43d4c18-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:24:18.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-96v7q" for this suite. May 8 12:24:24.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:24.599: INFO: namespace: e2e-tests-emptydir-96v7q, resource: bindings, ignored listing per whitelist May 8 12:24:24.616: INFO: namespace e2e-tests-emptydir-96v7q deletion completed in 6.100755096s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:24.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 8 12:24:24.702: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 12:24:24.742: INFO: Waiting for terminating namespaces to be deleted... May 8 12:24:24.745: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 8 12:24:24.750: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 8 12:24:24.750: INFO: Container kindnet-cni ready: true, restart count 0 May 8 12:24:24.750: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 8 12:24:24.750: INFO: Container coredns ready: true, restart count 0 May 8 12:24:24.750: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 8 12:24:24.751: INFO: Container kube-proxy ready: true, restart count 0 May 8 12:24:24.751: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 8 12:24:24.756: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 8 12:24:24.756: INFO: Container kindnet-cni ready: true, restart count 0 May 8 12:24:24.756: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 8 12:24:24.756: INFO: Container coredns ready: true, restart count 0 May 8 12:24:24.756: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container 
statuses recorded) May 8 12:24:24.756: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 8 12:24:24.888: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 8 12:24:24.888: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 8 12:24:24.888: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 8 12:24:24.888: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 8 12:24:24.888: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 8 12:24:24.888: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-da80da26-9126-11ea-8adb-0242ac110017.160d0d1dbcac37a7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-72w5d/filler-pod-da80da26-9126-11ea-8adb-0242ac110017 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-da80da26-9126-11ea-8adb-0242ac110017.160d0d1e6604c549], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-da80da26-9126-11ea-8adb-0242ac110017.160d0d1e9552e470], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-da80da26-9126-11ea-8adb-0242ac110017.160d0d1ea6242a28], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-da820051-9126-11ea-8adb-0242ac110017.160d0d1dbd09cb4d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-72w5d/filler-pod-da820051-9126-11ea-8adb-0242ac110017 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-da820051-9126-11ea-8adb-0242ac110017.160d0d1e07e6aa77], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-da820051-9126-11ea-8adb-0242ac110017.160d0d1e609e9bc0], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-da820051-9126-11ea-8adb-0242ac110017.160d0d1e78fa50b2], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160d0d1f23d128e6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] 
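The FailedScheduling event above is plain CPU accounting: the filler pods are presumably sized to consume each node's remaining allocatable CPU, so the additional pod's request cannot fit anywhere. A minimal sketch of that fit check, with illustrative millicore values (the log does not report the nodes' actual allocatable CPU, so all capacity numbers here are assumptions):

```python
# Sketch of the scheduler's CPU fit predicate ("Insufficient cpu"),
# in millicores. All capacity numbers are illustrative assumptions.
def fits_on_node(allocatable_m, requested_m, pod_request_m):
    """A pod fits if its CPU request does not exceed the node's
    allocatable CPU minus what running pods already request."""
    return pod_request_m <= allocatable_m - requested_m

allocatable = 2000            # hypothetical node allocatable CPU
existing = 100 + 100 + 0      # coredns + kindnet + kube-proxy (per the log)
filler = allocatable - existing  # filler pod sized to use up the rest

print(fits_on_node(allocatable, existing, filler))        # True: filler fits
print(fits_on_node(allocatable, existing + filler, 600))  # False: Insufficient cpu
```

With both workers saturated this way, the only remaining node is the control-plane node, whose taint the pod does not tolerate, which is why the event reads "0/3 nodes are available".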
STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:24:32.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-72w5d" for this suite. May 8 12:24:38.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:38.127: INFO: namespace: e2e-tests-sched-pred-72w5d, resource: bindings, ignored listing per whitelist May 8 12:24:38.170: INFO: namespace e2e-tests-sched-pred-72w5d deletion completed in 6.08580573s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.554 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:38.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be 
consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e2a7a758-9126-11ea-8adb-0242ac110017 STEP: Creating a pod to test consume secrets May 8 12:24:38.587: INFO: Waiting up to 5m0s for pod "pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-secrets-klt6r" to be "success or failure" May 8 12:24:38.591: INFO: Pod "pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689869ms May 8 12:24:40.662: INFO: Pod "pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074521727s May 8 12:24:42.666: INFO: Pod "pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078687576s STEP: Saw pod success May 8 12:24:42.666: INFO: Pod "pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:24:42.679: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017 container secret-volume-test: STEP: delete the pod May 8 12:24:42.700: INFO: Waiting for pod pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017 to disappear May 8 12:24:42.727: INFO: Pod pod-secrets-e2a9fa04-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:24:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-klt6r" for this suite. 
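The secret-volume test above creates a secret, then a pod whose container reads the mounted secret file and exits, so the pod phase reaches Succeeded. A hypothetical minimal manifest for such a pod, sketched as a Python dict (the names, image, and key path are placeholders, not the generated identifiers in the log):

```python
# Hypothetical minimal manifest for a secret-volume test pod.
# Names and image are placeholders; the real test generates unique suffixes.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",  # pod should end in a terminal Succeeded phase
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/secret-volume/data-1"],
            "volumeMounts": [{
                "name": "secret-volume",
                "mountPath": "/etc/secret-volume",
            }],
        }],
        "volumes": [{
            "name": "secret-volume",
            "secret": {"secretName": "secret-test-example"},
        }],
    },
}

# The volume name is what ties the container's mount to the secret source.
assert (pod["spec"]["containers"][0]["volumeMounts"][0]["name"]
        == pod["spec"]["volumes"][0]["name"])
print(pod["spec"]["volumes"][0]["secret"]["secretName"])
```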
May 8 12:24:48.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:48.778: INFO: namespace: e2e-tests-secrets-klt6r, resource: bindings, ignored listing per whitelist May 8 12:24:48.823: INFO: namespace e2e-tests-secrets-klt6r deletion completed in 6.092615178s • [SLOW TEST:10.653 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:48.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 8 12:24:48.944: INFO: Waiting up to 5m0s for pod "pod-e8d6c8ef-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-fxspd" to be "success or failure" May 8 12:24:48.999: INFO: Pod "pod-e8d6c8ef-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 54.716787ms May 8 12:24:51.099: INFO: Pod "pod-e8d6c8ef-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.154694074s May 8 12:24:53.103: INFO: Pod "pod-e8d6c8ef-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159086135s STEP: Saw pod success May 8 12:24:53.103: INFO: Pod "pod-e8d6c8ef-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:24:53.106: INFO: Trying to get logs from node hunter-worker2 pod pod-e8d6c8ef-9126-11ea-8adb-0242ac110017 container test-container: STEP: delete the pod May 8 12:24:53.170: INFO: Waiting for pod pod-e8d6c8ef-9126-11ea-8adb-0242ac110017 to disappear May 8 12:24:53.198: INFO: Pod pod-e8d6c8ef-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:24:53.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fxspd" for this suite. May 8 12:24:59.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:24:59.296: INFO: namespace: e2e-tests-emptydir-fxspd, resource: bindings, ignored listing per whitelist May 8 12:24:59.296: INFO: namespace e2e-tests-emptydir-fxspd deletion completed in 6.081343222s • [SLOW TEST:10.473 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:24:59.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 8 12:24:59.400: INFO: Waiting up to 5m0s for pod "var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-var-expansion-zc4wh" to be "success or failure" May 8 12:24:59.406: INFO: Pod "var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.96524ms May 8 12:25:01.410: INFO: Pod "var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009806925s May 8 12:25:03.414: INFO: Pod "var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01384447s STEP: Saw pod success May 8 12:25:03.414: INFO: Pod "var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:25:03.417: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017 container dapi-container: STEP: delete the pod May 8 12:25:03.456: INFO: Waiting for pod var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017 to disappear May 8 12:25:03.488: INFO: Pod var-expansion-ef0e3907-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:25:03.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-zc4wh" for this suite. May 8 12:25:09.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:25:09.550: INFO: namespace: e2e-tests-var-expansion-zc4wh, resource: bindings, ignored listing per whitelist May 8 12:25:09.580: INFO: namespace e2e-tests-var-expansion-zc4wh deletion completed in 6.088913509s • [SLOW TEST:10.284 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 8 12:25:09.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 12:25:09.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-z6mbx" to be "success or failure" May 8 12:25:09.710: INFO: Pod "downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 51.058749ms May 8 12:25:11.715: INFO: Pod "downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055910823s May 8 12:25:13.722: INFO: Pod "downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.063008266s STEP: Saw pod success May 8 12:25:13.722: INFO: Pod "downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:25:13.725: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 12:25:13.750: INFO: Waiting for pod downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017 to disappear May 8 12:25:13.766: INFO: Pod downwardapi-volume-f52f6244-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:25:13.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z6mbx" for this suite. May 8 12:25:19.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:25:19.808: INFO: namespace: e2e-tests-projected-z6mbx, resource: bindings, ignored listing per whitelist May 8 12:25:19.860: INFO: namespace e2e-tests-projected-z6mbx deletion completed in 6.091646882s • [SLOW TEST:10.279 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client May 8 12:25:19.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 8 12:25:19.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-gvl89" to be "success or failure" May 8 12:25:20.015: INFO: Pod "downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 25.217906ms May 8 12:25:22.075: INFO: Pod "downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08553034s May 8 12:25:24.079: INFO: Pod "downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.089268209s STEP: Saw pod success May 8 12:25:24.079: INFO: Pod "downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017" satisfied condition "success or failure" May 8 12:25:24.083: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017 container client-container: STEP: delete the pod May 8 12:25:24.127: INFO: Waiting for pod downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017 to disappear May 8 12:25:24.162: INFO: Pod downwardapi-volume-fb55cf55-9126-11ea-8adb-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:25:24.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gvl89" for this suite. May 8 12:25:30.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:25:30.230: INFO: namespace: e2e-tests-projected-gvl89, resource: bindings, ignored listing per whitelist May 8 12:25:30.322: INFO: namespace e2e-tests-projected-gvl89 deletion completed in 6.156418667s • [SLOW TEST:10.462 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:25:30.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-qqtsx [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 8 12:25:30.485: INFO: Found 0 stateful pods, waiting for 3 May 8 12:25:40.489: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 12:25:40.489: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 12:25:40.489: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 8 12:25:50.490: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 12:25:50.490: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 12:25:50.490: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 8 12:25:50.519: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 8 
12:26:00.586: INFO: Updating stateful set ss2 May 8 12:26:00.597: INFO: Waiting for Pod e2e-tests-statefulset-qqtsx/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 8 12:26:10.606: INFO: Waiting for Pod e2e-tests-statefulset-qqtsx/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 8 12:26:20.732: INFO: Found 2 stateful pods, waiting for 3 May 8 12:26:30.738: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 12:26:30.738: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 12:26:30.738: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 8 12:26:30.762: INFO: Updating stateful set ss2 May 8 12:26:30.780: INFO: Waiting for Pod e2e-tests-statefulset-qqtsx/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 8 12:26:40.807: INFO: Updating stateful set ss2 May 8 12:26:40.874: INFO: Waiting for StatefulSet e2e-tests-statefulset-qqtsx/ss2 to complete update May 8 12:26:40.874: INFO: Waiting for Pod e2e-tests-statefulset-qqtsx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 8 12:26:50.883: INFO: Waiting for StatefulSet e2e-tests-statefulset-qqtsx/ss2 to complete update May 8 12:26:50.883: INFO: Waiting for Pod e2e-tests-statefulset-qqtsx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 8 12:27:00.883: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qqtsx May 8 12:27:00.886: INFO: Scaling statefulset ss2 to 0 May 8 12:27:30.923: INFO: Waiting for statefulset status.replicas updated to 0 May 8 12:27:30.925: INFO: Deleting statefulset ss2 [AfterEach] 
[sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:27:30.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-qqtsx" for this suite. May 8 12:27:37.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:27:37.073: INFO: namespace: e2e-tests-statefulset-qqtsx, resource: bindings, ignored listing per whitelist May 8 12:27:37.118: INFO: namespace e2e-tests-statefulset-qqtsx deletion completed in 6.170418713s • [SLOW TEST:126.795 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 8 12:27:37.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service 
account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 8 12:27:43.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-wfjkc" for this suite. May 8 12:27:49.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 12:27:49.536: INFO: namespace: e2e-tests-namespaces-wfjkc, resource: bindings, ignored listing per whitelist May 8 12:27:49.565: INFO: namespace e2e-tests-namespaces-wfjkc deletion completed in 6.091470689s STEP: Destroying namespace "e2e-tests-nsdeletetest-xflrk" for this suite. May 8 12:27:49.567: INFO: Namespace e2e-tests-nsdeletetest-xflrk was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-k678r" for this suite. 
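The namespace test above reduces to: create a service in a namespace, delete the namespace, wait for it to be removed, recreate it, and verify the service did not survive. A toy model of that cascading, namespace-scoped cleanup (plain dicts standing in for the API server's state; not the actual controller logic):

```python
# Toy model: deleting a namespace cascades to every object scoped to it.
cluster = {"ns-a": {"services": ["test-service"]}}

def delete_namespace(state, ns):
    # Cascading delete: the namespace and all its namespaced objects go away.
    state.pop(ns, None)

def recreate_namespace(state, ns):
    # A recreated namespace starts empty; old objects do not come back.
    state[ns] = {"services": []}

delete_namespace(cluster, "ns-a")
recreate_namespace(cluster, "ns-a")
print(cluster["ns-a"]["services"])  # []
```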
May 8 12:27:55.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:27:55.618: INFO: namespace: e2e-tests-nsdeletetest-k678r, resource: bindings, ignored listing per whitelist
May 8 12:27:55.676: INFO: namespace e2e-tests-nsdeletetest-k678r deletion completed in 6.108809874s
• [SLOW TEST:18.559 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:27:55.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 8 12:27:55.773: INFO: Waiting up to 5m0s for pod "pod-5830ebd9-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-zsjll" to be "success or failure"
May 8 12:27:55.794: INFO: Pod "pod-5830ebd9-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.54083ms
May 8 12:27:57.798: INFO: Pod "pod-5830ebd9-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025485376s
May 8 12:27:59.803: INFO: Pod "pod-5830ebd9-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029674284s
STEP: Saw pod success
May 8 12:27:59.803: INFO: Pod "pod-5830ebd9-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:27:59.806: INFO: Trying to get logs from node hunter-worker2 pod pod-5830ebd9-9127-11ea-8adb-0242ac110017 container test-container:
STEP: delete the pod
May 8 12:27:59.824: INFO: Waiting for pod pod-5830ebd9-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:27:59.847: INFO: Pod pod-5830ebd9-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:27:59.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zsjll" for this suite.
May 8 12:28:05.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:28:05.916: INFO: namespace: e2e-tests-emptydir-zsjll, resource: bindings, ignored listing per whitelist
May 8 12:28:05.981: INFO: namespace e2e-tests-emptydir-zsjll deletion completed in 6.131406458s
• [SLOW TEST:10.304 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:28:05.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-v8sj2
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-v8sj2
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-v8sj2
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-v8sj2
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-v8sj2
May 8 12:28:10.167: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8sj2, name: ss-0, uid: 5e91e91c-9127-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 8 12:28:11.244: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8sj2, name: ss-0, uid: 5e91e91c-9127-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 8 12:28:11.256: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v8sj2, name: ss-0, uid: 5e91e91c-9127-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 8 12:28:11.276: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-v8sj2
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-v8sj2
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-v8sj2 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 8 12:28:21.494: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v8sj2
May 8 12:28:21.498: INFO: Scaling statefulset ss to 0
May 8 12:28:31.518: INFO: Waiting for statefulset status.replicas updated to 0
May 8 12:28:31.521: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:28:31.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-v8sj2" for this suite.
May 8 12:28:37.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:28:37.630: INFO: namespace: e2e-tests-statefulset-v8sj2, resource: bindings, ignored listing per whitelist
May 8 12:28:37.667: INFO: namespace e2e-tests-statefulset-v8sj2 deletion completed in 6.095201115s
• [SLOW TEST:31.686 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:28:37.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 8 12:28:37.792: INFO: Creating ReplicaSet my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017
May 8 12:28:37.799: INFO: Pod name my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017: Found 0 pods out of 1
May 8 12:28:42.803: INFO: Pod name my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017: Found 1 pods out of 1
May 8 12:28:42.803: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017" is running
May 8 12:28:42.806: INFO: Pod "my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017-9sv52" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 12:28:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 12:28:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 12:28:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 12:28:37 +0000 UTC Reason: Message:}])
May 8 12:28:42.806: INFO: Trying to dial the pod
May 8 12:28:47.816: INFO: Controller my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017: Got expected result from replica 1 [my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017-9sv52]: "my-hostname-basic-713ee1fa-9127-11ea-8adb-0242ac110017-9sv52", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:28:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-5jtws" for this suite.
May 8 12:28:53.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:28:53.884: INFO: namespace: e2e-tests-replicaset-5jtws, resource: bindings, ignored listing per whitelist
May 8 12:28:53.908: INFO: namespace e2e-tests-replicaset-5jtws deletion completed in 6.089069432s
• [SLOW TEST:16.241 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:28:53.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7aebca1e-9127-11ea-8adb-0242ac110017
STEP: Creating a pod to test consume configMaps
May 8 12:28:54.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-l8dqg" to be "success or failure"
May 8 12:28:54.083: INFO: Pod "pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.282277ms
May 8 12:28:56.087: INFO: Pod "pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016093234s
May 8 12:28:58.091: INFO: Pod "pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019750031s
STEP: Saw pod success
May 8 12:28:58.091: INFO: Pod "pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:28:58.093: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 8 12:28:58.133: INFO: Waiting for pod pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:28:58.142: INFO: Pod pod-projected-configmaps-7af21e18-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:28:58.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l8dqg" for this suite.
May 8 12:29:04.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:29:04.198: INFO: namespace: e2e-tests-projected-l8dqg, resource: bindings, ignored listing per whitelist
May 8 12:29:04.234: INFO: namespace e2e-tests-projected-l8dqg deletion completed in 6.088183697s
• [SLOW TEST:10.325 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:29:04.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 8 12:29:12.424: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:12.437: INFO: Pod pod-with-prestop-http-hook still exists
May 8 12:29:14.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:14.540: INFO: Pod pod-with-prestop-http-hook still exists
May 8 12:29:16.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:16.480: INFO: Pod pod-with-prestop-http-hook still exists
May 8 12:29:18.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:18.441: INFO: Pod pod-with-prestop-http-hook still exists
May 8 12:29:20.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:20.450: INFO: Pod pod-with-prestop-http-hook still exists
May 8 12:29:22.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 8 12:29:22.474: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:29:22.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-j7v2x" for this suite.
May 8 12:29:44.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:29:44.601: INFO: namespace: e2e-tests-container-lifecycle-hook-j7v2x, resource: bindings, ignored listing per whitelist
May 8 12:29:44.604: INFO: namespace e2e-tests-container-lifecycle-hook-j7v2x deletion completed in 22.119713372s
• [SLOW TEST:40.371 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:29:44.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-24d9v
I0508 12:29:44.698075 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-24d9v, replica count: 1
I0508 12:29:45.748465 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0508 12:29:46.748662 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0508 12:29:47.748937 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 8 12:29:47.877: INFO: Created: latency-svc-xs5lg
May 8 12:29:47.910: INFO: Got endpoints: latency-svc-xs5lg [61.208867ms]
May 8 12:29:47.943: INFO: Created: latency-svc-kwzhm
May 8 12:29:47.959: INFO: Got endpoints: latency-svc-kwzhm [48.847149ms]
May 8 12:29:47.979: INFO: Created: latency-svc-b8csk
May 8 12:29:47.995: INFO: Got endpoints: latency-svc-b8csk [85.060393ms]
May 8 12:29:48.045: INFO: Created: latency-svc-mlv4x
May 8 12:29:48.054: INFO: Got endpoints: latency-svc-mlv4x [143.738315ms]
May 8 12:29:48.105: INFO: Created: latency-svc-slmmf
May 8 12:29:48.126: INFO: Got endpoints: latency-svc-slmmf [215.830161ms]
May 8 12:29:48.181: INFO: Created: latency-svc-4kf85
May 8 12:29:48.184: INFO: Got endpoints: latency-svc-4kf85 [273.971774ms]
May 8 12:29:48.236: INFO: Created: latency-svc-hqvdm
May 8 12:29:48.254: INFO: Got endpoints: latency-svc-hqvdm [343.614839ms]
May 8 12:29:48.273: INFO: Created: latency-svc-79l8j
May 8 12:29:48.312: INFO: Got endpoints: latency-svc-79l8j [401.647661ms]
May 8 12:29:48.326: INFO: Created: latency-svc-wfd5n
May 8 12:29:48.343: INFO: Got endpoints: latency-svc-wfd5n [433.176199ms]
May 8 12:29:48.374: INFO: Created: latency-svc-nqs7g
May 8 12:29:48.404: INFO: Got endpoints: latency-svc-nqs7g [493.548458ms]
May 8 12:29:48.459: INFO: Created: latency-svc-rh6vm
May 8 12:29:48.476: INFO: Got endpoints: latency-svc-rh6vm [565.449945ms]
May 8 12:29:48.501: INFO: Created: latency-svc-lgjnd
May 8 12:29:48.511: INFO: Got endpoints: latency-svc-lgjnd [600.76466ms]
May 8 12:29:48.531: INFO: Created: latency-svc-7tndc
May 8 12:29:48.542: INFO: Got endpoints: latency-svc-7tndc [631.159491ms]
May 8 12:29:48.588: INFO: Created: latency-svc-b95zm
May 8 12:29:48.602: INFO: Got endpoints: latency-svc-b95zm [691.817132ms]
May 8 12:29:48.638: INFO: Created: latency-svc-wcqmh
May 8 12:29:48.669: INFO: Got endpoints: latency-svc-wcqmh [758.281316ms]
May 8 12:29:48.720: INFO: Created: latency-svc-brm57
May 8 12:29:48.723: INFO: Got endpoints: latency-svc-brm57 [812.698892ms]
May 8 12:29:48.782: INFO: Created: latency-svc-bwj75
May 8 12:29:48.807: INFO: Got endpoints: latency-svc-bwj75 [847.825067ms]
May 8 12:29:48.875: INFO: Created: latency-svc-vbtzk
May 8 12:29:48.879: INFO: Got endpoints: latency-svc-vbtzk [884.045301ms]
May 8 12:29:49.032: INFO: Created: latency-svc-bnsgp
May 8 12:29:49.040: INFO: Got endpoints: latency-svc-bnsgp [985.55543ms]
May 8 12:29:49.071: INFO: Created: latency-svc-st6tp
May 8 12:29:49.107: INFO: Got endpoints: latency-svc-st6tp [981.173231ms]
May 8 12:29:49.217: INFO: Created: latency-svc-4nkg8
May 8 12:29:49.220: INFO: Got endpoints: latency-svc-4nkg8 [1.035987233s]
May 8 12:29:49.280: INFO: Created: latency-svc-4snzt
May 8 12:29:49.299: INFO: Got endpoints: latency-svc-4snzt [1.045545819s]
May 8 12:29:49.379: INFO: Created: latency-svc-8nrn9
May 8 12:29:49.381: INFO: Got endpoints: latency-svc-8nrn9 [1.069161658s]
May 8 12:29:49.455: INFO: Created: latency-svc-fxnqp
May 8 12:29:49.527: INFO: Got endpoints: latency-svc-fxnqp [1.182986035s]
May 8 12:29:49.569: INFO: Created: latency-svc-r4d2d
May 8 12:29:49.583: INFO: Got endpoints: latency-svc-r4d2d [1.179420872s]
May 8 12:29:49.672: INFO: Created: latency-svc-hcqhn
May 8 12:29:49.674: INFO: Got endpoints: latency-svc-hcqhn [1.198493008s]
May 8 12:29:49.718: INFO: Created: latency-svc-wrb8r
May 8 12:29:49.734: INFO: Got endpoints: latency-svc-wrb8r [1.222498112s]
May 8 12:29:49.767: INFO: Created: latency-svc-8bpqj
May 8 12:29:49.827: INFO: Got endpoints: latency-svc-8bpqj [1.285465692s]
May 8 12:29:49.857: INFO: Created: latency-svc-nvnlv
May 8 12:29:49.872: INFO: Got endpoints: latency-svc-nvnlv [1.270130634s]
May 8 12:29:49.904: INFO: Created: latency-svc-z86kc
May 8 12:29:49.921: INFO: Got endpoints: latency-svc-z86kc [1.252380143s]
May 8 12:29:49.971: INFO: Created: latency-svc-992t9
May 8 12:29:49.975: INFO: Got endpoints: latency-svc-992t9 [1.251867747s]
May 8 12:29:50.012: INFO: Created: latency-svc-jh4sp
May 8 12:29:50.029: INFO: Got endpoints: latency-svc-jh4sp [1.222301703s]
May 8 12:29:50.055: INFO: Created: latency-svc-kdggm
May 8 12:29:50.109: INFO: Got endpoints: latency-svc-kdggm [1.229440371s]
May 8 12:29:50.138: INFO: Created: latency-svc-4glcj
May 8 12:29:50.150: INFO: Got endpoints: latency-svc-4glcj [1.109764723s]
May 8 12:29:50.174: INFO: Created: latency-svc-8hgd5
May 8 12:29:50.186: INFO: Got endpoints: latency-svc-8hgd5 [1.078223825s]
May 8 12:29:50.265: INFO: Created: latency-svc-grt5t
May 8 12:29:50.268: INFO: Got endpoints: latency-svc-grt5t [1.047429566s]
May 8 12:29:50.295: INFO: Created: latency-svc-2mgq9
May 8 12:29:50.342: INFO: Got endpoints: latency-svc-2mgq9 [1.042921937s]
May 8 12:29:50.414: INFO: Created: latency-svc-dl8gb
May 8 12:29:50.418: INFO: Got endpoints: latency-svc-dl8gb [1.036217822s]
May 8 12:29:50.499: INFO: Created: latency-svc-s7bfs
May 8 12:29:50.576: INFO: Got endpoints: latency-svc-s7bfs [1.049233862s]
May 8 12:29:50.589: INFO: Created: latency-svc-59fzl
May 8 12:29:50.629: INFO: Got endpoints: latency-svc-59fzl [1.04591138s]
May 8 12:29:50.630: INFO: Created: latency-svc-zm8fx
May 8 12:29:50.661: INFO: Got endpoints: latency-svc-zm8fx [986.582989ms]
May 8 12:29:50.714: INFO: Created: latency-svc-tph6j
May 8 12:29:50.722: INFO: Got endpoints: latency-svc-tph6j [987.970659ms]
May 8 12:29:50.744: INFO: Created: latency-svc-khcjm
May 8 12:29:50.757: INFO: Got endpoints: latency-svc-khcjm [930.013517ms]
May 8 12:29:50.787: INFO: Created: latency-svc-dhxzv
May 8 12:29:50.799: INFO: Got endpoints: latency-svc-dhxzv [926.771228ms]
May 8 12:29:50.846: INFO: Created: latency-svc-t6259
May 8 12:29:50.853: INFO: Got endpoints: latency-svc-t6259 [931.928985ms]
May 8 12:29:50.918: INFO: Created: latency-svc-h498z
May 8 12:29:51.007: INFO: Got endpoints: latency-svc-h498z [1.031669212s]
May 8 12:29:51.020: INFO: Created: latency-svc-qfzmc
May 8 12:29:51.040: INFO: Got endpoints: latency-svc-qfzmc [1.01015795s]
May 8 12:29:51.092: INFO: Created: latency-svc-nhwql
May 8 12:29:51.132: INFO: Got endpoints: latency-svc-nhwql [1.023423289s]
May 8 12:29:51.152: INFO: Created: latency-svc-2cjsc
May 8 12:29:51.165: INFO: Got endpoints: latency-svc-2cjsc [1.015824866s]
May 8 12:29:51.187: INFO: Created: latency-svc-7vq8x
May 8 12:29:51.202: INFO: Got endpoints: latency-svc-7vq8x [1.016278595s]
May 8 12:29:51.230: INFO: Created: latency-svc-97xtp
May 8 12:29:51.278: INFO: Got endpoints: latency-svc-97xtp [1.010149173s]
May 8 12:29:51.326: INFO: Created: latency-svc-9j7ft
May 8 12:29:51.340: INFO: Got endpoints: latency-svc-9j7ft [997.895716ms]
May 8 12:29:51.403: INFO: Created: latency-svc-8wqnm
May 8 12:29:51.407: INFO: Got endpoints: latency-svc-8wqnm [989.384525ms]
May 8 12:29:51.433: INFO: Created: latency-svc-bhqhq
May 8 12:29:51.450: INFO: Got endpoints: latency-svc-bhqhq [873.683474ms]
May 8 12:29:51.482: INFO: Created: latency-svc-fkf7c
May 8 12:29:51.497: INFO: Got endpoints: latency-svc-fkf7c [867.967649ms]
May 8 12:29:51.546: INFO: Created: latency-svc-9hfh8
May 8 12:29:51.559: INFO: Got endpoints: latency-svc-9hfh8 [898.229095ms]
May 8 12:29:51.614: INFO: Created: latency-svc-948nk
May 8 12:29:51.624: INFO: Got endpoints: latency-svc-948nk [902.16469ms]
May 8 12:29:51.719: INFO: Created: latency-svc-vr2s2
May 8 12:29:51.722: INFO: Got endpoints: latency-svc-vr2s2 [965.157514ms]
May 8 12:29:51.806: INFO: Created: latency-svc-2g9rz
May 8 12:29:51.869: INFO: Got endpoints: latency-svc-2g9rz [1.069845087s]
May 8 12:29:51.872: INFO: Created: latency-svc-tcpxv
May 8 12:29:51.895: INFO: Got endpoints: latency-svc-tcpxv [1.042432199s]
May 8 12:29:51.938: INFO: Created: latency-svc-vxfjn
May 8 12:29:51.948: INFO: Got endpoints: latency-svc-vxfjn [941.122487ms]
May 8 12:29:51.968: INFO: Created: latency-svc-9n42p
May 8 12:29:52.019: INFO: Got endpoints: latency-svc-9n42p [979.18592ms]
May 8 12:29:52.033: INFO: Created: latency-svc-rvpgd
May 8 12:29:52.051: INFO: Got endpoints: latency-svc-rvpgd [918.5077ms]
May 8 12:29:52.069: INFO: Created: latency-svc-d6t5n
May 8 12:29:52.087: INFO: Got endpoints: latency-svc-d6t5n [921.570022ms]
May 8 12:29:52.106: INFO: Created: latency-svc-lcgdx
May 8 12:29:52.162: INFO: Got endpoints: latency-svc-lcgdx [960.230025ms]
May 8 12:29:52.190: INFO: Created: latency-svc-qp7vx
May 8 12:29:52.202: INFO: Got endpoints: latency-svc-qp7vx [924.28916ms]
May 8 12:29:52.226: INFO: Created: latency-svc-j4p7r
May 8 12:29:52.239: INFO: Got endpoints: latency-svc-j4p7r [898.075113ms]
May 8 12:29:52.261: INFO: Created: latency-svc-t5659
May 8 12:29:52.324: INFO: Got endpoints: latency-svc-t5659 [917.429295ms]
May 8 12:29:52.333: INFO: Created: latency-svc-qx7gv
May 8 12:29:52.347: INFO: Got endpoints: latency-svc-qx7gv [897.887669ms]
May 8 12:29:52.369: INFO: Created: latency-svc-6ls85
May 8 12:29:52.377: INFO: Got endpoints: latency-svc-6ls85 [879.802377ms]
May 8 12:29:52.406: INFO: Created: latency-svc-nmvsx
May 8 12:29:52.420: INFO: Got endpoints: latency-svc-nmvsx [860.326022ms]
May 8 12:29:52.468: INFO: Created: latency-svc-b9qgn
May 8 12:29:52.474: INFO: Got endpoints: latency-svc-b9qgn [850.04295ms]
May 8 12:29:52.496: INFO: Created: latency-svc-gt8sq
May 8 12:29:52.510: INFO: Got endpoints: latency-svc-gt8sq [787.604147ms]
May 8 12:29:52.531: INFO: Created: latency-svc-xt48w
May 8 12:29:52.540: INFO: Got endpoints: latency-svc-xt48w [671.073227ms]
May 8 12:29:52.562: INFO: Created: latency-svc-lr78p
May 8 12:29:52.629: INFO: Got endpoints: latency-svc-lr78p [733.922415ms]
May 8 12:29:52.664: INFO: Created: latency-svc-h2cgk
May 8 12:29:52.679: INFO: Got endpoints: latency-svc-h2cgk [730.865389ms]
May 8 12:29:52.700: INFO: Created: latency-svc-r7hj2
May 8 12:29:52.716: INFO: Got endpoints: latency-svc-r7hj2 [696.756089ms]
May 8 12:29:52.786: INFO: Created: latency-svc-tjtzm
May 8 12:29:52.788: INFO: Got endpoints: latency-svc-tjtzm [736.892348ms]
May 8 12:29:52.819: INFO: Created: latency-svc-6ldft
May 8 12:29:52.836: INFO: Got endpoints: latency-svc-6ldft [748.644266ms]
May 8 12:29:52.861: INFO: Created: latency-svc-v9mxx
May 8 12:29:52.872: INFO: Got endpoints: latency-svc-v9mxx [709.1554ms]
May 8 12:29:52.936: INFO: Created: latency-svc-d487q
May 8 12:29:52.938: INFO: Got endpoints: latency-svc-d487q [735.599763ms]
May 8 12:29:52.963: INFO: Created: latency-svc-h5ldh
May 8 12:29:52.980: INFO: Got endpoints: latency-svc-h5ldh [741.530123ms]
May 8 12:29:53.006: INFO: Created: latency-svc-6tng2
May 8 12:29:53.023: INFO: Got endpoints: latency-svc-6tng2 [697.926817ms]
May 8 12:29:53.067: INFO: Created: latency-svc-fgq5g
May 8 12:29:53.089: INFO: Got endpoints: latency-svc-fgq5g [741.809075ms]
May 8 12:29:53.119: INFO: Created: latency-svc-cfm6b
May 8 12:29:53.131: INFO: Got endpoints: latency-svc-cfm6b [753.433845ms]
May 8 12:29:53.205: INFO: Created: latency-svc-gdjtp
May 8 12:29:53.208: INFO: Got endpoints: latency-svc-gdjtp [788.047121ms]
May 8 12:29:53.234: INFO: Created: latency-svc-dprkf
May 8 12:29:53.257: INFO: Got endpoints: latency-svc-dprkf [783.083027ms]
May 8 12:29:53.360: INFO: Created: latency-svc-r8gtr
May 8 12:29:53.364: INFO: Got endpoints: latency-svc-r8gtr [853.698148ms]
May 8 12:29:53.403: INFO: Created: latency-svc-9nwsm
May 8 12:29:53.443: INFO: Got endpoints: latency-svc-9nwsm [902.453949ms]
May 8 12:29:53.498: INFO: Created: latency-svc-d84v8
May 8 12:29:53.510: INFO: Got endpoints: latency-svc-d84v8 [880.814027ms]
May 8 12:29:53.533: INFO: Created: latency-svc-cjwcj
May 8 12:29:53.546: INFO: Got endpoints: latency-svc-cjwcj [867.252684ms]
May 8 12:29:53.575: INFO: Created: latency-svc-7kk5r
May 8 12:29:53.588: INFO: Got endpoints: latency-svc-7kk5r [872.787762ms]
May 8 12:29:53.654: INFO: Created: latency-svc-gzm49
May 8 12:29:53.661: INFO: Got endpoints: latency-svc-gzm49 [872.948513ms]
May 8 12:29:53.707: INFO: Created: latency-svc-v8rpp
May 8 12:29:53.727: INFO: Got endpoints: latency-svc-v8rpp [891.537489ms]
May 8 12:29:53.810: INFO: Created: latency-svc-272jh
May 8 12:29:53.823: INFO: Got endpoints: latency-svc-272jh [951.321223ms]
May 8 12:29:53.856: INFO: Created: latency-svc-m4j5l
May 8 12:29:53.865: INFO: Got endpoints: latency-svc-m4j5l [926.983308ms]
May 8 12:29:53.887: INFO: Created: latency-svc-4rg5s
May 8 12:29:53.895: INFO: Got endpoints: latency-svc-4rg5s [915.054257ms]
May 8 12:29:53.953: INFO: Created: latency-svc-n5xsl
May 8 12:29:53.970: INFO: Got endpoints: latency-svc-n5xsl [947.456641ms]
May 8 12:29:54.026: INFO: Created: latency-svc-r8fnn
May 8 12:29:54.091: INFO: Got endpoints: latency-svc-r8fnn [1.001564922s]
May 8 12:29:54.103: INFO: Created: latency-svc-6pqj5
May 8 12:29:54.112: INFO: Got endpoints: latency-svc-6pqj5 [981.02512ms]
May 8 12:29:54.139: INFO: Created: latency-svc-7pxct
May 8 12:29:54.148: INFO: Got endpoints: latency-svc-7pxct [940.504452ms]
May 8 12:29:54.169: INFO: Created: latency-svc-pn458
May 8 12:29:54.185: INFO: Got endpoints: latency-svc-pn458 [927.337927ms]
May 8 12:29:54.229: INFO: Created: latency-svc-xsjfc
May 8 12:29:54.232: INFO: Got endpoints: latency-svc-xsjfc [868.35877ms]
May 8 12:29:54.253: INFO: Created: latency-svc-sg7vd
May 8 12:29:54.269: INFO: Got endpoints: latency-svc-sg7vd [826.458142ms]
May 8 12:29:54.289: INFO: Created: latency-svc-psql7
May 8 12:29:54.306: INFO: Got endpoints: latency-svc-psql7 [795.269082ms]
May 8 12:29:54.325: INFO: Created: latency-svc-b9wxh
May 8 12:29:54.360: INFO: Got endpoints: latency-svc-b9wxh [813.607622ms]
May 8 12:29:54.374: INFO: Created: latency-svc-t2zh7
May 8 12:29:54.391: INFO: Got endpoints: latency-svc-t2zh7 [802.016412ms]
May 8 12:29:54.415: INFO: Created: latency-svc-47fhg
May 8 12:29:54.444: INFO: Got endpoints: latency-svc-47fhg [783.172627ms]
May 8 12:29:54.499: INFO: Created: latency-svc-zmc48
May 8 12:29:54.501: INFO: Got endpoints: latency-svc-zmc48 [773.955565ms]
May 8 12:29:54.541: INFO: Created: latency-svc-jhm2r
May 8 12:29:54.571: INFO: Got endpoints: latency-svc-jhm2r [747.610506ms]
May 8 12:29:54.649: INFO: Created: latency-svc-kpf8x
May 8 12:29:54.651: INFO: Got endpoints: latency-svc-kpf8x [786.23902ms]
May 8 12:29:54.679: INFO: Created: latency-svc-lrhg9
May 8 12:29:54.691: INFO: Got endpoints: latency-svc-lrhg9 [796.057343ms]
May 8 12:29:54.715: INFO: Created: latency-svc-sv2jr
May 8 12:29:54.728: INFO: Got endpoints: latency-svc-sv2jr [757.493041ms]
May 8 12:29:54.779: INFO: Created: latency-svc-87crk
May 8 12:29:54.783: INFO: Got endpoints: latency-svc-87crk [691.564285ms]
May 8 12:29:54.835: INFO: Created: latency-svc-k4xgn
May 8 12:29:54.866: INFO: Got endpoints: latency-svc-k4xgn [754.53274ms]
May 8 12:29:54.935: INFO: Created: latency-svc-pk425
May 8 12:29:54.945: INFO: Got endpoints: latency-svc-pk425 [796.01508ms]
May 8 12:29:54.967: INFO: Created: latency-svc-ndk9q
May 8 12:29:54.981: INFO: Got endpoints: latency-svc-ndk9q [796.063688ms]
May 8 12:29:55.021: INFO: Created: latency-svc-rntsl
May 8 12:29:55.097: INFO: Got endpoints: latency-svc-rntsl [864.816988ms]
May 8 12:29:55.100: INFO: Created: latency-svc-znbgb
May 8 12:29:55.146: INFO: Got endpoints: latency-svc-znbgb [876.889275ms]
May 8 12:29:55.194: INFO: Created: latency-svc-l7b5f
May 8 12:29:55.270: INFO: Got endpoints: latency-svc-l7b5f [964.663739ms]
May 8 12:29:55.272: INFO: Created: latency-svc-bqrqg
May 8 12:29:55.296: INFO: Got endpoints: latency-svc-bqrqg [936.340056ms]
May 8 12:29:55.339: INFO: Created: latency-svc-kzht6
May 8 12:29:55.360: INFO: Got endpoints: latency-svc-kzht6 [969.275974ms]
May 8 12:29:55.420: INFO: Created: latency-svc-bwtfr
May 8 12:29:55.424: INFO: Got endpoints: latency-svc-bwtfr [979.919779ms]
May 8 12:29:55.464: INFO: Created: latency-svc-chgcp
May 8 12:29:55.481: INFO: Got endpoints: latency-svc-chgcp [979.088354ms]
May 8 12:29:55.501: INFO: Created: latency-svc-6q6kd
May 8 12:29:55.516: INFO: Got endpoints: latency-svc-6q6kd [945.762015ms]
May 8 12:29:55.564: INFO: Created: latency-svc-csj78
May 8 12:29:55.577: INFO: Got endpoints: latency-svc-csj78 [925.106675ms]
May 8 12:29:55.596: INFO: Created: latency-svc-j7lcc
May 8 12:29:55.613: INFO: Got endpoints: latency-svc-j7lcc [921.488258ms]
May 8 12:29:55.632: INFO: Created: latency-svc-2pkjq
May 8 12:29:55.651: INFO: Got endpoints: latency-svc-2pkjq [923.46768ms]
May 8 12:29:55.714: INFO: Created: latency-svc-vxdpz
May 8 12:29:55.716: INFO: Got endpoints: latency-svc-vxdpz [933.527585ms]
May 8 12:29:55.770: INFO: Created: latency-svc-j2h9x
May 8 12:29:55.788: INFO: Got endpoints: latency-svc-j2h9x [921.156415ms]
May 8 12:29:55.813: INFO: Created: latency-svc-fqdb6
May 8 12:29:55.851: INFO: Got endpoints: latency-svc-fqdb6 [906.399907ms]
May 8 12:29:55.873: INFO: Created: latency-svc-g924x
May 8 12:29:55.890: INFO: Got endpoints: latency-svc-g924x [909.310209ms]
May 8 12:29:55.939: INFO: Created: latency-svc-75d8x
May 8 12:29:55.950: INFO: Got endpoints: latency-svc-75d8x [853.378039ms]
May 8 12:29:56.001: INFO: Created: latency-svc-lcwkl
May 8 12:29:56.006: INFO: Got endpoints: latency-svc-lcwkl [860.145272ms]
May 8 12:29:56.029: INFO: Created: latency-svc-9d9g4
May 8 12:29:56.047: INFO: Got endpoints: latency-svc-9d9g4 [776.384068ms]
May 8 12:29:56.077: INFO: Created: latency-svc-vprzc
May 8 12:29:56.089: INFO: Got endpoints: latency-svc-vprzc [792.86998ms]
May 8 12:29:56.151: INFO: Created: latency-svc-nlfvn
May 8 12:29:56.155: INFO: Got endpoints: latency-svc-nlfvn [795.427607ms]
May 8 12:29:56.178: INFO: Created: latency-svc-lbprv
May 8 12:29:56.192: INFO: Got endpoints: latency-svc-lbprv [767.541625ms]
May 8 12:29:56.214: INFO: Created: latency-svc-s9czv
May 8 12:29:56.228: INFO: Got endpoints: latency-svc-s9czv [747.096549ms]
May 8 12:29:56.250: INFO: Created: latency-svc-xw9dz
May 8 12:29:56.291: INFO: Got endpoints: latency-svc-xw9dz [774.588857ms]
May 8 12:29:56.346: INFO: Created: latency-svc-ckksw
May 8 12:29:56.360: INFO: Got endpoints: latency-svc-ckksw [783.664449ms]
May 8 12:29:56.382: INFO: Created: latency-svc-dlvgh
May 8 12:29:56.432: INFO: Got endpoints: latency-svc-dlvgh [818.720145ms]
May 8 12:29:56.461: INFO: Created: latency-svc-n5jxj
May 8 12:29:56.475: INFO: Got endpoints: latency-svc-n5jxj [823.73552ms]
May 8 12:29:56.497: INFO: Created: latency-svc-ppqjx
May 8 12:29:56.511: INFO: Got endpoints: latency-svc-ppqjx [794.902323ms]
May 8 12:29:56.570: INFO: Created: latency-svc-mwzbw
May 8 12:29:56.573: INFO: Got endpoints: latency-svc-mwzbw [785.545745ms]
May 8 12:29:56.610: INFO: Created: latency-svc-lc95h
May 8 12:29:56.631: INFO: Got endpoints: latency-svc-lc95h [780.485545ms]
May 8 12:29:56.658: INFO: Created: latency-svc-7v96h
May 8 12:29:56.737: INFO: Got endpoints: latency-svc-7v96h [847.116847ms]
May 8 12:29:56.740: INFO: Created: latency-svc-k9qzz
May 8 12:29:56.746: INFO: Got endpoints: latency-svc-k9qzz [794.990579ms]
May 8 12:29:56.766: INFO: Created: latency-svc-gzxx6
May 8 12:29:56.782: INFO: Got endpoints: latency-svc-gzxx6 [775.647084ms]
May 8 12:29:56.820: INFO: Created: latency-svc-9snlh
May 8 12:29:56.837: INFO: Got endpoints: latency-svc-9snlh [790.060461ms]
May 8 12:29:56.900: INFO: Created: latency-svc-hfjfk
May 8 12:29:56.915: INFO: Got endpoints: latency-svc-hfjfk [826.084627ms]
May 8 12:29:56.946: INFO: Created: latency-svc-7p46c
May 8 12:29:56.957: INFO: Got endpoints: latency-svc-7p46c [801.588136ms]
May 8 12:29:56.976: INFO: Created: latency-svc-9xxgj
May 8 12:29:56.994: INFO: Got endpoints: latency-svc-9xxgj [801.908071ms]
May 8 12:29:57.073: INFO: Created: latency-svc-k59c7
May 8 12:29:57.077: INFO: Got endpoints: latency-svc-k59c7 [849.236543ms]
May 8 12:29:57.101: INFO: Created: latency-svc-5drv5
May 8 12:29:57.114: INFO: Got endpoints: latency-svc-5drv5 [822.639137ms]
May 8 12:29:57.132: INFO: Created: latency-svc-d7md8
May 8 12:29:57.150: INFO: Got endpoints: latency-svc-d7md8 [789.550921ms]
May 8 12:29:57.228: INFO: Created: latency-svc-w2vj2
May 8 12:29:57.230: INFO: Got endpoints: latency-svc-w2vj2 [797.803108ms]
May 8 12:29:57.283: INFO: Created: latency-svc-xcklz
May 8 12:29:57.300: INFO: Got endpoints: latency-svc-xcklz [825.490949ms]
May 8 12:29:57.384: INFO: Created: latency-svc-5m9h5
May 8 12:29:57.413: INFO: Got endpoints: latency-svc-5m9h5 [902.363516ms]
May 8 12:29:57.462: INFO: Created: latency-svc-9csth
May 8 12:29:57.539: INFO: Got endpoints: latency-svc-9csth [966.101586ms]
May 8 12:29:57.558: INFO: Created: latency-svc-jv5xd
May 8 12:29:57.571: INFO: Got endpoints: latency-svc-jv5xd [939.749442ms]
May 8 12:29:57.599: INFO: Created: latency-svc-hplk4
May 8 12:29:57.613: INFO: Got endpoints: latency-svc-hplk4 [875.861139ms]
May 8 12:29:57.636: INFO: Created: latency-svc-6rzln
May 8 12:29:57.677: INFO: Got endpoints: latency-svc-6rzln [931.538175ms]
May 8 12:29:57.720: INFO: Created: latency-svc-l6bzn
May 8 12:29:57.758: INFO: Got endpoints: latency-svc-l6bzn [975.631638ms]
May 8 12:29:57.810: INFO: Created: latency-svc-8z72p
May 8 12:29:57.830: INFO: Got endpoints: latency-svc-8z72p [992.930979ms]
May 8 12:29:57.863: INFO: Created: latency-svc-g9zbv
May 8 12:29:57.878: INFO: Got endpoints: latency-svc-g9zbv [962.94119ms]
May 8 12:29:57.899: INFO: Created: latency-svc-765cg
May 8 12:29:57.935: INFO: Got endpoints: latency-svc-765cg [977.741636ms]
May 8 12:29:57.966: INFO: Created: latency-svc-7zvnx
May 8 12:29:58.008: INFO: Created: latency-svc-l6hfn
May 8 12:29:58.103: INFO: Got endpoints: latency-svc-7zvnx [1.109638949s]
May 8 12:29:58.104: INFO: Created: latency-svc-w6wd6
May 8 12:29:58.119: INFO: Got endpoints: latency-svc-w6wd6 [1.00522452s]
May 8 12:29:58.152: INFO: Created: latency-svc-k4n5d
May 8
12:29:58.152: INFO: Got endpoints: latency-svc-l6hfn [1.074419189s] May 8 12:29:58.167: INFO: Got endpoints: latency-svc-k4n5d [1.017353118s] May 8 12:29:58.234: INFO: Created: latency-svc-4r4v4 May 8 12:29:58.237: INFO: Got endpoints: latency-svc-4r4v4 [1.007456823s] May 8 12:29:58.266: INFO: Created: latency-svc-jtsww May 8 12:29:58.296: INFO: Got endpoints: latency-svc-jtsww [995.592962ms] May 8 12:29:58.325: INFO: Created: latency-svc-v8jdx May 8 12:29:58.372: INFO: Got endpoints: latency-svc-v8jdx [958.210244ms] May 8 12:29:58.379: INFO: Created: latency-svc-gvb95 May 8 12:29:58.408: INFO: Got endpoints: latency-svc-gvb95 [868.894768ms] May 8 12:29:58.440: INFO: Created: latency-svc-hgptc May 8 12:29:58.457: INFO: Got endpoints: latency-svc-hgptc [885.203032ms] May 8 12:29:58.523: INFO: Created: latency-svc-2z47g May 8 12:29:58.525: INFO: Got endpoints: latency-svc-2z47g [911.958512ms] May 8 12:29:58.578: INFO: Created: latency-svc-rlvjz May 8 12:29:58.595: INFO: Got endpoints: latency-svc-rlvjz [917.872075ms] May 8 12:29:58.619: INFO: Created: latency-svc-47bfx May 8 12:29:58.701: INFO: Got endpoints: latency-svc-47bfx [943.540396ms] May 8 12:29:58.705: INFO: Created: latency-svc-29qlj May 8 12:29:58.710: INFO: Got endpoints: latency-svc-29qlj [880.374653ms] May 8 12:29:58.739: INFO: Created: latency-svc-49glz May 8 12:29:58.763: INFO: Got endpoints: latency-svc-49glz [884.922806ms] May 8 12:29:58.794: INFO: Created: latency-svc-5wj4n May 8 12:29:58.863: INFO: Got endpoints: latency-svc-5wj4n [928.103573ms] May 8 12:29:58.866: INFO: Created: latency-svc-ttjsb May 8 12:29:58.872: INFO: Got endpoints: latency-svc-ttjsb [768.567085ms] May 8 12:29:58.901: INFO: Created: latency-svc-vzmh5 May 8 12:29:58.914: INFO: Got endpoints: latency-svc-vzmh5 [795.18947ms] May 8 12:29:58.937: INFO: Created: latency-svc-s7v4z May 8 12:29:59.007: INFO: Got endpoints: latency-svc-s7v4z [854.956843ms] May 8 12:29:59.022: INFO: Created: latency-svc-hdp26 May 8 12:29:59.035: INFO: 
Got endpoints: latency-svc-hdp26 [867.360031ms] May 8 12:29:59.069: INFO: Created: latency-svc-pxmnm May 8 12:29:59.083: INFO: Got endpoints: latency-svc-pxmnm [845.891447ms] May 8 12:29:59.157: INFO: Created: latency-svc-x8mj2 May 8 12:29:59.159: INFO: Got endpoints: latency-svc-x8mj2 [863.084488ms] May 8 12:29:59.225: INFO: Created: latency-svc-frrzb May 8 12:29:59.246: INFO: Got endpoints: latency-svc-frrzb [873.826184ms] May 8 12:29:59.295: INFO: Created: latency-svc-ztcdl May 8 12:29:59.297: INFO: Got endpoints: latency-svc-ztcdl [888.561374ms] May 8 12:29:59.333: INFO: Created: latency-svc-r99xs May 8 12:29:59.355: INFO: Got endpoints: latency-svc-r99xs [898.195947ms] May 8 12:29:59.374: INFO: Created: latency-svc-wnf2k May 8 12:29:59.390: INFO: Got endpoints: latency-svc-wnf2k [865.022891ms] May 8 12:29:59.457: INFO: Created: latency-svc-prnxr May 8 12:29:59.463: INFO: Got endpoints: latency-svc-prnxr [867.563855ms] May 8 12:29:59.502: INFO: Created: latency-svc-bf8bn May 8 12:29:59.600: INFO: Got endpoints: latency-svc-bf8bn [897.961681ms] May 8 12:29:59.627: INFO: Created: latency-svc-7mpb8 May 8 12:29:59.656: INFO: Got endpoints: latency-svc-7mpb8 [945.913228ms] May 8 12:29:59.681: INFO: Created: latency-svc-f6x78 May 8 12:29:59.698: INFO: Got endpoints: latency-svc-f6x78 [935.009192ms] May 8 12:29:59.773: INFO: Created: latency-svc-pvrxj May 8 12:29:59.776: INFO: Got endpoints: latency-svc-pvrxj [912.937286ms] May 8 12:29:59.824: INFO: Created: latency-svc-jkw2m May 8 12:29:59.843: INFO: Got endpoints: latency-svc-jkw2m [970.51889ms] May 8 12:29:59.951: INFO: Created: latency-svc-skq5l May 8 12:29:59.975: INFO: Got endpoints: latency-svc-skq5l [1.060636476s] May 8 12:29:59.999: INFO: Created: latency-svc-hxnv9 May 8 12:30:00.018: INFO: Got endpoints: latency-svc-hxnv9 [1.011208799s] May 8 12:30:00.103: INFO: Created: latency-svc-rhkxl May 8 12:30:00.106: INFO: Got endpoints: latency-svc-rhkxl [1.070778827s] May 8 12:30:00.106: INFO: Latencies: 
[48.847149ms 85.060393ms 143.738315ms 215.830161ms 273.971774ms 343.614839ms 401.647661ms 433.176199ms 493.548458ms 565.449945ms 600.76466ms 631.159491ms 671.073227ms 691.564285ms 691.817132ms 696.756089ms 697.926817ms 709.1554ms 730.865389ms 733.922415ms 735.599763ms 736.892348ms 741.530123ms 741.809075ms 747.096549ms 747.610506ms 748.644266ms 753.433845ms 754.53274ms 757.493041ms 758.281316ms 767.541625ms 768.567085ms 773.955565ms 774.588857ms 775.647084ms 776.384068ms 780.485545ms 783.083027ms 783.172627ms 783.664449ms 785.545745ms 786.23902ms 787.604147ms 788.047121ms 789.550921ms 790.060461ms 792.86998ms 794.902323ms 794.990579ms 795.18947ms 795.269082ms 795.427607ms 796.01508ms 796.057343ms 796.063688ms 797.803108ms 801.588136ms 801.908071ms 802.016412ms 812.698892ms 813.607622ms 818.720145ms 822.639137ms 823.73552ms 825.490949ms 826.084627ms 826.458142ms 845.891447ms 847.116847ms 847.825067ms 849.236543ms 850.04295ms 853.378039ms 853.698148ms 854.956843ms 860.145272ms 860.326022ms 863.084488ms 864.816988ms 865.022891ms 867.252684ms 867.360031ms 867.563855ms 867.967649ms 868.35877ms 868.894768ms 872.787762ms 872.948513ms 873.683474ms 873.826184ms 875.861139ms 876.889275ms 879.802377ms 880.374653ms 880.814027ms 884.045301ms 884.922806ms 885.203032ms 888.561374ms 891.537489ms 897.887669ms 897.961681ms 898.075113ms 898.195947ms 898.229095ms 902.16469ms 902.363516ms 902.453949ms 906.399907ms 909.310209ms 911.958512ms 912.937286ms 915.054257ms 917.429295ms 917.872075ms 918.5077ms 921.156415ms 921.488258ms 921.570022ms 923.46768ms 924.28916ms 925.106675ms 926.771228ms 926.983308ms 927.337927ms 928.103573ms 930.013517ms 931.538175ms 931.928985ms 933.527585ms 935.009192ms 936.340056ms 939.749442ms 940.504452ms 941.122487ms 943.540396ms 945.762015ms 945.913228ms 947.456641ms 951.321223ms 958.210244ms 960.230025ms 962.94119ms 964.663739ms 965.157514ms 966.101586ms 969.275974ms 970.51889ms 975.631638ms 977.741636ms 979.088354ms 979.18592ms 979.919779ms 981.02512ms 
981.173231ms 985.55543ms 986.582989ms 987.970659ms 989.384525ms 992.930979ms 995.592962ms 997.895716ms 1.001564922s 1.00522452s 1.007456823s 1.010149173s 1.01015795s 1.011208799s 1.015824866s 1.016278595s 1.017353118s 1.023423289s 1.031669212s 1.035987233s 1.036217822s 1.042432199s 1.042921937s 1.045545819s 1.04591138s 1.047429566s 1.049233862s 1.060636476s 1.069161658s 1.069845087s 1.070778827s 1.074419189s 1.078223825s 1.109638949s 1.109764723s 1.179420872s 1.182986035s 1.198493008s 1.222301703s 1.222498112s 1.229440371s 1.251867747s 1.252380143s 1.270130634s 1.285465692s]
May 8 12:30:00.106: INFO: 50 %ile: 891.537489ms
May 8 12:30:00.106: INFO: 90 %ile: 1.047429566s
May 8 12:30:00.106: INFO: 99 %ile: 1.270130634s
May 8 12:30:00.106: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:30:00.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-24d9v" for this suite.
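The latency summary above reduces 200 endpoint-creation samples to 50/90/99 %ile figures. A minimal Python sketch of that reduction, assuming a nearest-rank percentile (the e2e framework's exact estimator may differ); `parse_go_duration` and `percentile` are illustrative names, and the parser handles only the `ms`/`s` suffixes that appear in this log:

```python
import math

def parse_go_duration(s):
    # Only the "ms" and "s" suffixes seen in this log are handled.
    if s.endswith("ms"):
        return float(s[:-2]) / 1000.0
    if s.endswith("s"):
        return float(s[:-1])
    raise ValueError("unsupported duration: " + s)

def percentile(samples, p):
    # Nearest-rank percentile over the sorted samples.
    ordered = sorted(samples)
    idx = max(0, math.ceil(p / 100.0 * len(ordered)) - 1)
    return ordered[idx]

latencies = [parse_go_duration(s) for s in
             ("48.847149ms", "891.537489ms", "979.919779ms", "1.270130634s")]
p50 = percentile(latencies, 50)  # middle of the sorted sample
```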
May 8 12:30:24.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:30:24.274: INFO: namespace: e2e-tests-svc-latency-24d9v, resource: bindings, ignored listing per whitelist
May 8 12:30:24.278: INFO: namespace e2e-tests-svc-latency-24d9v deletion completed in 24.151689692s
• [SLOW TEST:39.673 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:30:24.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-b0c8d449-9127-11ea-8adb-0242ac110017
STEP: Creating secret with name secret-projected-all-test-volume-b0c8d424-9127-11ea-8adb-0242ac110017
STEP: Creating a pod to test Check all projections for projected volume plugin
May 8 12:30:24.459: INFO: Waiting up to 5m0s for pod "projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-wf9fr" to be "success or failure"
May 8 12:30:24.469: INFO: Pod "projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.741568ms
May 8 12:30:26.473: INFO: Pod "projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014117919s
May 8 12:30:28.477: INFO: Pod "projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017721528s
STEP: Saw pod success
May 8 12:30:28.477: INFO: Pod "projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:30:28.479: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017 container projected-all-volume-test:
STEP: delete the pod
May 8 12:30:28.508: INFO: Waiting for pod projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:30:28.516: INFO: Pod projected-volume-b0c8d3b0-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:30:28.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wf9fr" for this suite.
May 8 12:30:34.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:30:34.585: INFO: namespace: e2e-tests-projected-wf9fr, resource: bindings, ignored listing per whitelist
May 8 12:30:34.609: INFO: namespace e2e-tests-projected-wf9fr deletion completed in 6.089803056s
• [SLOW TEST:10.331 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:30:34.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
May 8 12:30:34.722: INFO: Waiting up to 5m0s for pod "var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-var-expansion-r8xvp" to be "success or failure"
May 8 12:30:34.768: INFO: Pod "var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 45.952433ms
May 8 12:30:36.772: INFO: Pod "var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049512532s
May 8 12:30:38.776: INFO: Pod "var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053548107s
STEP: Saw pod success
May 8 12:30:38.776: INFO: Pod "var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:30:38.778: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017 container dapi-container:
STEP: delete the pod
May 8 12:30:38.815: INFO: Waiting for pod var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:30:38.827: INFO: Pod var-expansion-b6ee1477-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:30:38.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-r8xvp" for this suite.
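Each "Waiting up to 5m0s for pod ... to be 'success or failure'" sequence in this log is a poll loop: the pod phase is checked roughly every two seconds until it is terminal or the timeout expires. A rough Python sketch of that pattern (the real framework uses a Go poll helper from apimachinery's `wait` package; `wait_for_pod_phase` and `get_phase` are illustrative names):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    # Poll get_phase() until the pod reaches a terminal phase or we time out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()  # e.g. "Pending", "Running", "Succeeded", "Failed"
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase within %ss" % timeout)
```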
May 8 12:30:44.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:30:44.919: INFO: namespace: e2e-tests-var-expansion-r8xvp, resource: bindings, ignored listing per whitelist
May 8 12:30:44.980: INFO: namespace e2e-tests-var-expansion-r8xvp deletion completed in 6.148340802s
• [SLOW TEST:10.371 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:30:44.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 8 12:30:45.086: INFO: Waiting up to 5m0s for pod "pod-bd1d99c1-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-emptydir-xnk66" to be "success or failure"
May 8 12:30:45.175: INFO: Pod "pod-bd1d99c1-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 88.885682ms
May 8 12:30:47.319: INFO: Pod "pod-bd1d99c1-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232756575s
May 8 12:30:49.324: INFO: Pod "pod-bd1d99c1-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237638508s
STEP: Saw pod success
May 8 12:30:49.324: INFO: Pod "pod-bd1d99c1-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:30:49.327: INFO: Trying to get logs from node hunter-worker2 pod pod-bd1d99c1-9127-11ea-8adb-0242ac110017 container test-container:
STEP: delete the pod
May 8 12:30:49.567: INFO: Waiting for pod pod-bd1d99c1-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:30:49.570: INFO: Pod pod-bd1d99c1-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:30:49.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xnk66" for this suite.
May 8 12:30:55.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:30:55.602: INFO: namespace: e2e-tests-emptydir-xnk66, resource: bindings, ignored listing per whitelist
May 8 12:30:55.658: INFO: namespace e2e-tests-emptydir-xnk66 deletion completed in 6.084351624s
• [SLOW TEST:10.677 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:30:55.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 8 12:30:55.768: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 8 12:30:55.808: INFO: Waiting for terminating namespaces to be deleted...
May 8 12:30:55.810: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 8 12:30:55.816: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.816: INFO: Container kube-proxy ready: true, restart count 0
May 8 12:30:55.816: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.816: INFO: Container kindnet-cni ready: true, restart count 0
May 8 12:30:55.816: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.816: INFO: Container coredns ready: true, restart count 0
May 8 12:30:55.816: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 8 12:30:55.821: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.821: INFO: Container kindnet-cni ready: true, restart count 0
May 8 12:30:55.821: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.821: INFO: Container coredns ready: true, restart count 0
May 8 12:30:55.821: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 8 12:30:55.821: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d0d78c23cf616], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:30:56.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rqnpn" for this suite.
May 8 12:31:02.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:31:02.932: INFO: namespace: e2e-tests-sched-pred-rqnpn, resource: bindings, ignored listing per whitelist
May 8 12:31:02.946: INFO: namespace e2e-tests-sched-pred-rqnpn deletion completed in 6.101359802s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.289 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:31:02.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 8 12:31:03.064: INFO: Waiting up to 5m0s for pod "client-containers-c7d4531a-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-containers-s7t2s" to be "success or failure"
May 8 12:31:03.068: INFO: Pod "client-containers-c7d4531a-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448751ms
May 8 12:31:05.080: INFO: Pod "client-containers-c7d4531a-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015257952s
May 8 12:31:07.087: INFO: Pod "client-containers-c7d4531a-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022705067s
STEP: Saw pod success
May 8 12:31:07.087: INFO: Pod "client-containers-c7d4531a-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:31:07.095: INFO: Trying to get logs from node hunter-worker pod client-containers-c7d4531a-9127-11ea-8adb-0242ac110017 container test-container:
STEP: delete the pod
May 8 12:31:07.250: INFO: Waiting for pod client-containers-c7d4531a-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:31:07.253: INFO: Pod client-containers-c7d4531a-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:31:07.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s7t2s" for this suite.
May 8 12:31:13.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:31:13.290: INFO: namespace: e2e-tests-containers-s7t2s, resource: bindings, ignored listing per whitelist
May 8 12:31:13.352: INFO: namespace e2e-tests-containers-s7t2s deletion completed in 6.095194055s
• [SLOW TEST:10.405 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 8 12:31:13.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 8 12:31:13.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017" in namespace "e2e-tests-projected-q9kmw" to be "success or failure"
May 8 12:31:13.463: INFO: Pod "downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575839ms
May 8 12:31:15.477: INFO: Pod "downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017424957s
May 8 12:31:17.481: INFO: Pod "downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021028783s
STEP: Saw pod success
May 8 12:31:17.481: INFO: Pod "downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017" satisfied condition "success or failure"
May 8 12:31:17.484: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017 container client-container:
STEP: delete the pod
May 8 12:31:17.503: INFO: Waiting for pod downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017 to disappear
May 8 12:31:17.505: INFO: Pod downwardapi-volume-ce05947a-9127-11ea-8adb-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 8 12:31:17.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q9kmw" for this suite.
May 8 12:31:23.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:31:23.656: INFO: namespace: e2e-tests-projected-q9kmw, resource: bindings, ignored listing per whitelist
May 8 12:31:23.678: INFO: namespace e2e-tests-projected-q9kmw deletion completed in 6.170492743s
• [SLOW TEST:10.326 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
May 8 12:31:23.679: INFO: Running AfterSuite actions on all nodes
May 8 12:31:23.679: INFO: Running AfterSuite actions on node 1
May 8 12:31:23.679: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6279.333 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS
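The closing "Ran ... Specs ... Passed | Failed" footer is the line CI systems typically key off. A small sketch for machine-checking it (the regex and the `parse_ginkgo_footer` name are my own, matched only against the footer format shown in this log):

```python
import re

# Matches the two summary lines Ginkgo prints at the end of a suite run.
SUMMARY_RE = re.compile(
    r"Ran (\d+) of (\d+) Specs in ([\d.]+) seconds.*?"
    r"(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped",
    re.S,
)

def parse_ginkgo_footer(text):
    # Returns (ran, total, seconds, passed, failed, pending, skipped).
    m = SUMMARY_RE.search(text)
    if m is None:
        raise ValueError("no Ginkgo summary footer found")
    ran, total, secs, passed, failed, pending, skipped = m.groups()
    return (int(ran), int(total), float(secs),
            int(passed), int(failed), int(pending), int(skipped))
```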