I0120 12:56:12.229124 8 e2e.go:243] Starting e2e run "fd28f523-115b-4f8d-a77f-f0d26c35e455" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579524970 - Will randomize all specs
Will run 215 of 4412 specs

Jan 20 12:56:12.710: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 12:56:12.717: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 20 12:56:12.755: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 20 12:56:12.795: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 20 12:56:12.796: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 20 12:56:12.796: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 20 12:56:12.805: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 20 12:56:12.805: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 20 12:56:12.805: INFO: e2e test version: v1.15.7
Jan 20 12:56:12.808: INFO: kube-apiserver version: v1.15.1
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:56:12.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 20 12:56:12.957: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:56:12.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283" in namespace "downward-api-6386" to be "success or failure"
Jan 20 12:56:12.993: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 19.165546ms
Jan 20 12:56:15.003: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028905757s
Jan 20 12:56:17.023: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049122106s
Jan 20 12:56:19.032: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057664271s
Jan 20 12:56:21.042: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067784842s
Jan 20 12:56:23.059: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Pending", Reason="", readiness=false. Elapsed: 10.085410168s
Jan 20 12:56:25.073: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.098944371s
STEP: Saw pod success
Jan 20 12:56:25.073: INFO: Pod "downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283" satisfied condition "success or failure"
Jan 20 12:56:25.078: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283 container client-container:
STEP: delete the pod
Jan 20 12:56:25.193: INFO: Waiting for pod downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283 to disappear
Jan 20 12:56:25.203: INFO: Pod downwardapi-volume-38da4536-e79f-48b1-ba79-602800a9c283 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:56:25.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6386" for this suite.
Jan 20 12:56:31.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:56:31.456: INFO: namespace downward-api-6386 deletion completed in 6.246576099s

• [SLOW TEST:18.648 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:56:31.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-44cf9964-8c9d-46e0-add3-e088c5b93b4f
STEP: Creating a pod to test consume configMaps
Jan 20 12:56:31.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069" in namespace "configmap-5481" to be "success or failure"
Jan 20 12:56:31.698: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Pending", Reason="", readiness=false. Elapsed: 12.615065ms
Jan 20 12:56:33.709: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02407703s
Jan 20 12:56:35.716: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030655558s
Jan 20 12:56:37.728: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042445482s
Jan 20 12:56:39.733: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047704252s
Jan 20 12:56:41.751: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065658245s
STEP: Saw pod success
Jan 20 12:56:41.752: INFO: Pod "pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069" satisfied condition "success or failure"
Jan 20 12:56:41.772: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069 container configmap-volume-test:
STEP: delete the pod
Jan 20 12:56:41.946: INFO: Waiting for pod pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069 to disappear
Jan 20 12:56:41.967: INFO: Pod pod-configmaps-1a3c824d-bf5c-4fe8-bc9a-5c74ff49a069 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:56:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5481" for this suite.
Jan 20 12:56:48.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:56:48.203: INFO: namespace configmap-5481 deletion completed in 6.23020537s

• [SLOW TEST:16.747 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:56:48.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 20 12:56:56.945: INFO: Successfully updated pod "labelsupdatee9566b2b-79fc-4fc7-96b3-4890464b6e04"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:56:59.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-233" for this suite.
Jan 20 12:57:21.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:57:21.242: INFO: namespace projected-233 deletion completed in 22.208571091s

• [SLOW TEST:33.037 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:57:21.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:57:21.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8339" for this suite.
Jan 20 12:57:27.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:57:27.551: INFO: namespace services-8339 deletion completed in 6.161465456s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.310 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:57:27.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 20 12:57:35.963: INFO: 2 pods remaining
Jan 20 12:57:35.963: INFO: 1 pods has nil DeletionTimestamp
Jan 20 12:57:35.963: INFO:
STEP: Gathering metrics
W0120 12:57:36.592770 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 12:57:36.593: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:57:36.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2206" for this suite.
Jan 20 12:57:48.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:57:48.873: INFO: namespace gc-2206 deletion completed in 12.275775595s

• [SLOW TEST:21.322 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:57:48.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0120 12:57:59.080451 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 12:57:59.080: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:57:59.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4417" for this suite.
Jan 20 12:58:05.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:58:05.646: INFO: namespace gc-4417 deletion completed in 6.560133829s

• [SLOW TEST:16.769 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:58:05.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 20 12:58:05.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9748'
Jan 20 12:58:08.335: INFO: stderr: ""
Jan 20 12:58:08.335: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 12:58:08.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:58:08.587: INFO: stderr: ""
Jan 20 12:58:08.587: INFO: stdout: "update-demo-nautilus-kq66z update-demo-nautilus-zhgrp "
Jan 20 12:58:08.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kq66z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:08.750: INFO: stderr: ""
Jan 20 12:58:08.751: INFO: stdout: ""
Jan 20 12:58:08.751: INFO: update-demo-nautilus-kq66z is created but not running
Jan 20 12:58:13.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:58:14.036: INFO: stderr: ""
Jan 20 12:58:14.036: INFO: stdout: "update-demo-nautilus-kq66z update-demo-nautilus-zhgrp "
Jan 20 12:58:14.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kq66z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:14.228: INFO: stderr: ""
Jan 20 12:58:14.228: INFO: stdout: ""
Jan 20 12:58:14.228: INFO: update-demo-nautilus-kq66z is created but not running
Jan 20 12:58:19.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:58:19.347: INFO: stderr: ""
Jan 20 12:58:19.347: INFO: stdout: "update-demo-nautilus-kq66z update-demo-nautilus-zhgrp "
Jan 20 12:58:19.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kq66z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:19.440: INFO: stderr: ""
Jan 20 12:58:19.440: INFO: stdout: ""
Jan 20 12:58:19.440: INFO: update-demo-nautilus-kq66z is created but not running
Jan 20 12:58:24.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:58:24.647: INFO: stderr: ""
Jan 20 12:58:24.647: INFO: stdout: "update-demo-nautilus-kq66z update-demo-nautilus-zhgrp "
Jan 20 12:58:24.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kq66z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:24.779: INFO: stderr: ""
Jan 20 12:58:24.779: INFO: stdout: "true"
Jan 20 12:58:24.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kq66z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:24.911: INFO: stderr: ""
Jan 20 12:58:24.911: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 12:58:24.911: INFO: validating pod update-demo-nautilus-kq66z
Jan 20 12:58:24.998: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 12:58:24.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 12:58:24.999: INFO: update-demo-nautilus-kq66z is verified up and running
Jan 20 12:58:24.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zhgrp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:25.135: INFO: stderr: ""
Jan 20 12:58:25.135: INFO: stdout: "true"
Jan 20 12:58:25.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zhgrp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:58:25.209: INFO: stderr: ""
Jan 20 12:58:25.210: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 12:58:25.210: INFO: validating pod update-demo-nautilus-zhgrp
Jan 20 12:58:25.239: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 12:58:25.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 12:58:25.239: INFO: update-demo-nautilus-zhgrp is verified up and running
STEP: rolling-update to new replication controller
Jan 20 12:58:25.241: INFO: scanned /root for discovery docs:
Jan 20 12:58:25.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9748'
Jan 20 12:58:55.196: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 20 12:58:55.196: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 12:58:55.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:58:55.318: INFO: stderr: ""
Jan 20 12:58:55.318: INFO: stdout: "update-demo-kitten-7jg2g update-demo-kitten-z2qdx update-demo-nautilus-kq66z "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 20 12:59:00.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9748'
Jan 20 12:59:00.595: INFO: stderr: ""
Jan 20 12:59:00.595: INFO: stdout: "update-demo-kitten-7jg2g update-demo-kitten-z2qdx "
Jan 20 12:59:00.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7jg2g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:59:00.783: INFO: stderr: ""
Jan 20 12:59:00.783: INFO: stdout: "true"
Jan 20 12:59:00.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7jg2g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:59:00.874: INFO: stderr: ""
Jan 20 12:59:00.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 20 12:59:00.874: INFO: validating pod update-demo-kitten-7jg2g
Jan 20 12:59:00.913: INFO: got data: {
  "image": "kitten.jpg"
}
Jan 20 12:59:00.913: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 20 12:59:00.913: INFO: update-demo-kitten-7jg2g is verified up and running
Jan 20 12:59:00.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z2qdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:59:00.989: INFO: stderr: ""
Jan 20 12:59:00.989: INFO: stdout: "true"
Jan 20 12:59:00.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z2qdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9748'
Jan 20 12:59:01.117: INFO: stderr: ""
Jan 20 12:59:01.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 20 12:59:01.117: INFO: validating pod update-demo-kitten-z2qdx
Jan 20 12:59:01.137: INFO: got data: {
  "image": "kitten.jpg"
}
Jan 20 12:59:01.137: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 20 12:59:01.137: INFO: update-demo-kitten-z2qdx is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:59:01.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9748" for this suite.
Jan 20 12:59:25.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:59:25.324: INFO: namespace kubectl-9748 deletion completed in 24.181639687s

• [SLOW TEST:79.677 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 12:59:25.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-4n7v
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 12:59:25.446: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4n7v" in namespace "subpath-4079" to be "success or failure"
Jan 20 12:59:25.474: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Pending", Reason="", readiness=false. Elapsed: 27.891571ms
Jan 20 12:59:27.485: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038818391s
Jan 20 12:59:29.495: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049527129s
Jan 20 12:59:31.510: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064302693s
Jan 20 12:59:33.518: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072252018s
Jan 20 12:59:35.526: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 10.080169046s
Jan 20 12:59:37.535: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 12.089229643s
Jan 20 12:59:39.543: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 14.097260591s
Jan 20 12:59:41.557: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 16.111273032s
Jan 20 12:59:43.568: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 18.12170046s
Jan 20 12:59:45.576: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 20.130359066s
Jan 20 12:59:47.585: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 22.138877384s
Jan 20 12:59:49.595: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 24.149020173s
Jan 20 12:59:51.605: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 26.158736033s
Jan 20 12:59:53.620: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Running", Reason="", readiness=true. Elapsed: 28.174019057s
Jan 20 12:59:55.628: INFO: Pod "pod-subpath-test-secret-4n7v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.182288107s
STEP: Saw pod success
Jan 20 12:59:55.628: INFO: Pod "pod-subpath-test-secret-4n7v" satisfied condition "success or failure"
Jan 20 12:59:55.633: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-4n7v container test-container-subpath-secret-4n7v:
STEP: delete the pod
Jan 20 12:59:55.697: INFO: Waiting for pod pod-subpath-test-secret-4n7v to disappear
Jan 20 12:59:55.703: INFO: Pod pod-subpath-test-secret-4n7v no longer exists
STEP: Deleting pod pod-subpath-test-secret-4n7v
Jan 20 12:59:55.703: INFO: Deleting pod "pod-subpath-test-secret-4n7v" in namespace "subpath-4079"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 12:59:55.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4079" for this suite.
Jan 20 13:00:01.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:00:01.896: INFO: namespace subpath-4079 deletion completed in 6.184367155s
• [SLOW TEST:36.571 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:00:01.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 20 13:00:02.057: INFO: Waiting up to 5m0s for pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8" in namespace "containers-2621" to be "success or failure"
Jan 20 13:00:02.098: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.261121ms
Jan 20 13:00:04.147: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089366829s
Jan 20 13:00:06.156: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098944117s
Jan 20 13:00:08.171: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113301833s
Jan 20 13:00:10.185: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127714627s
STEP: Saw pod success
Jan 20 13:00:10.185: INFO: Pod "client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8" satisfied condition "success or failure"
Jan 20 13:00:10.194: INFO: Trying to get logs from node iruya-node pod client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8 container test-container:
STEP: delete the pod
Jan 20 13:00:10.327: INFO: Waiting for pod client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8 to disappear
Jan 20 13:00:10.358: INFO: Pod client-containers-b46debeb-cff4-451a-b950-6d5a4d0fc4b8 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:00:10.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2621" for this suite.
Jan 20 13:00:16.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:00:16.535: INFO: namespace containers-2621 deletion completed in 6.169033691s
• [SLOW TEST:14.638 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:00:16.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9337
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9337 to expose endpoints map[]
Jan 20 13:00:16.710: INFO: Get endpoints failed (7.716178ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 20 13:00:17.719: INFO: successfully validated that service multi-endpoint-test in namespace services-9337 exposes endpoints map[] (1.016232729s elapsed)
STEP: Creating pod pod1 in namespace services-9337
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9337 to expose endpoints map[pod1:[100]]
Jan 20 13:00:21.815: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.083903878s elapsed, will retry)
Jan 20 13:00:25.981: INFO: successfully validated that service multi-endpoint-test in namespace services-9337 exposes endpoints map[pod1:[100]] (8.25003885s elapsed)
STEP: Creating pod pod2 in namespace services-9337
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9337 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 20 13:00:30.371: INFO: Unexpected endpoints: found map[538b39e6-e9bb-4757-a46c-b4fdbdc3f1f7:[100]], expected map[pod1:[100] pod2:[101]] (4.376486535s elapsed, will retry)
Jan 20 13:00:33.468: INFO: successfully validated that service multi-endpoint-test in namespace services-9337 exposes endpoints map[pod1:[100] pod2:[101]] (7.473807637s elapsed)
STEP: Deleting pod pod1 in namespace services-9337
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9337 to expose endpoints map[pod2:[101]]
Jan 20 13:00:33.525: INFO: successfully validated that service multi-endpoint-test in namespace services-9337 exposes endpoints map[pod2:[101]] (46.130814ms elapsed)
STEP: Deleting pod pod2 in namespace services-9337
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9337 to expose endpoints map[]
Jan 20 13:00:34.598: INFO: successfully validated that service multi-endpoint-test in namespace services-9337 exposes endpoints map[] (1.061178842s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:00:34.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9337" for this suite.
Jan 20 13:00:56.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:00:56.856: INFO: namespace services-9337 deletion completed in 22.227980058s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:40.320 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:00:56.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:00:56.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b" in namespace "projected-2841" to be "success or failure"
Jan 20 13:00:56.945: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.653171ms
Jan 20 13:00:58.959: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035658609s
Jan 20 13:01:00.969: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04562796s
Jan 20 13:01:02.978: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055107828s
Jan 20 13:01:04.985: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061457554s
Jan 20 13:01:06.998: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074619374s
STEP: Saw pod success
Jan 20 13:01:06.998: INFO: Pod "downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b" satisfied condition "success or failure"
Jan 20 13:01:07.005: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b container client-container:
STEP: delete the pod
Jan 20 13:01:07.057: INFO: Waiting for pod downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b to disappear
Jan 20 13:01:07.073: INFO: Pod downwardapi-volume-e11133d3-9f9f-42cf-8ab3-9b9f2a24622b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:01:07.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2841" for this suite.
Jan 20 13:01:13.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:01:13.255: INFO: namespace projected-2841 deletion completed in 6.169713851s
• [SLOW TEST:16.399 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:01:13.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 20 13:01:13.353: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix510156576/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:01:13.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9525" for this suite.
Jan 20 13:01:19.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:01:19.828: INFO: namespace kubectl-9525 deletion completed in 6.30985594s
• [SLOW TEST:6.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:01:19.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:01:48.022: INFO: Container started at 2020-01-20 13:01:26 +0000 UTC, pod became ready at 2020-01-20 13:01:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:01:48.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5008" for this suite. Jan 20 13:02:10.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:02:10.177: INFO: namespace container-probe-5008 deletion completed in 22.148789174s • [SLOW TEST:50.348 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:02:10.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 
13:02:24.440: INFO: File wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-713a771e-95f8-441f-bbc0-3d403c3ab1e8 contains '' instead of 'foo.example.com.' Jan 20 13:02:24.447: INFO: File jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-713a771e-95f8-441f-bbc0-3d403c3ab1e8 contains '' instead of 'foo.example.com.' Jan 20 13:02:24.447: INFO: Lookups using dns-3101/dns-test-713a771e-95f8-441f-bbc0-3d403c3ab1e8 failed for: [wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local] Jan 20 13:02:29.478: INFO: DNS probes using dns-test-713a771e-95f8-441f-bbc0-3d403c3ab1e8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 13:02:43.680: INFO: File wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains '' instead of 'bar.example.com.' Jan 20 13:02:43.689: INFO: File jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains '' instead of 'bar.example.com.' 
Jan 20 13:02:43.689: INFO: Lookups using dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 failed for: [wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local]
Jan 20 13:02:48.706: INFO: File wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 20 13:02:48.714: INFO: File jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 20 13:02:48.714: INFO: Lookups using dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 failed for: [wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local]
Jan 20 13:02:53.710: INFO: File wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 20 13:02:53.723: INFO: File jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 20 13:02:53.723: INFO: Lookups using dns-3101/dns-test-5d375288-c078-4503-89f6-681ada40d906 failed for: [wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local]
Jan 20 13:02:58.708: INFO: DNS probes using dns-test-5d375288-c078-4503-89f6-681ada40d906 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3101.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 13:03:13.013: INFO: File wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-c7d4176f-43ea-41b9-bb41-090aa5f275ea contains '' instead of '10.97.203.194'
Jan 20 13:03:13.072: INFO: File jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local from pod dns-3101/dns-test-c7d4176f-43ea-41b9-bb41-090aa5f275ea contains '' instead of '10.97.203.194'
Jan 20 13:03:13.072: INFO: Lookups using dns-3101/dns-test-c7d4176f-43ea-41b9-bb41-090aa5f275ea failed for: [wheezy_udp@dns-test-service-3.dns-3101.svc.cluster.local jessie_udp@dns-test-service-3.dns-3101.svc.cluster.local]
Jan 20 13:03:18.103: INFO: DNS probes using dns-test-c7d4176f-43ea-41b9-bb41-090aa5f275ea succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:03:18.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3101" for this suite.
Jan 20 13:03:26.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:03:26.703: INFO: namespace dns-3101 deletion completed in 8.211242606s
• [SLOW TEST:76.527 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:03:26.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:03:26.841: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 20 13:03:31.860: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 20 13:03:37.884: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 20 13:03:39.893: INFO: Creating deployment "test-rollover-deployment"
Jan 20 13:03:39.931: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 20 13:03:41.961: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 20 13:03:41.970: INFO: Ensure that both replica sets have 1 created replica
Jan 20 13:03:41.977: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 20 13:03:41.992: INFO: Updating deployment test-rollover-deployment
Jan 20 13:03:41.992: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 20 13:03:44.975: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 20 13:03:45.349: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 20 13:03:45.402: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:45.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122222, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:47.413: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:47.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122222, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:49.431: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:49.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122222, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:51.415: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:51.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122222, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:53.414: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:53.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122231, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:55.415: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:55.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122231, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:57.418: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:57.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122231, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:03:59.425: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:03:59.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122231, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:04:01.422: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 13:04:01.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122220, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122231, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715122219, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:04:03.430: INFO:
Jan 20 13:04:03.430: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 20 13:04:03.450: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4136,SelfLink:/apis/apps/v1/namespaces/deployment-4136/deployments/test-rollover-deployment,UID:0e6dbd59-50fd-41c2-b332-4759f0eddf99,ResourceVersion:21179151,Generation:2,CreationTimestamp:2020-01-20 13:03:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-20 13:03:40 +0000 UTC 2020-01-20 13:03:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-20 13:04:02 +0000 UTC 2020-01-20 13:03:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 20 13:04:03.456: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4136,SelfLink:/apis/apps/v1/namespaces/deployment-4136/replicasets/test-rollover-deployment-854595fc44,UID:2e94798d-0059-4e84-9dd8-0746fee11011,ResourceVersion:21179140,Generation:2,CreationTimestamp:2020-01-20 13:03:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0e6dbd59-50fd-41c2-b332-4759f0eddf99 0xc000ffa707 0xc000ffa708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 20 13:04:03.457: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 20 13:04:03.457: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4136,SelfLink:/apis/apps/v1/namespaces/deployment-4136/replicasets/test-rollover-controller,UID:71bd7314-b87f-4b0d-a106-a182f28e125a,ResourceVersion:21179150,Generation:2,CreationTimestamp:2020-01-20 13:03:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0e6dbd59-50fd-41c2-b332-4759f0eddf99 0xc000ffa627 0xc000ffa628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 13:04:03.458: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4136,SelfLink:/apis/apps/v1/namespaces/deployment-4136/replicasets/test-rollover-deployment-9b8b997cf,UID:065b2d82-d798-4c8d-964a-a1ef23bb65b2,ResourceVersion:21179103,Generation:2,CreationTimestamp:2020-01-20 13:03:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0e6dbd59-50fd-41c2-b332-4759f0eddf99 0xc000ffa7e0 0xc000ffa7e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 13:04:03.466: INFO: Pod "test-rollover-deployment-854595fc44-22xsb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-22xsb,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4136,SelfLink:/api/v1/namespaces/deployment-4136/pods/test-rollover-deployment-854595fc44-22xsb,UID:6725a14d-2a4e-4cd6-966b-8108fabe80a7,ResourceVersion:21179123,Generation:0,CreationTimestamp:2020-01-20 13:03:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2e94798d-0059-4e84-9dd8-0746fee11011 0xc002639d07 0xc002639d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-42227 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-42227,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-42227 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002639d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002639d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:03:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:03:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:03:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:03:42 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-20 13:03:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-20 13:03:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0a547412c0e4c80c748338dc48fe4173d49c4761a5840f2d4c4d2692625b1211}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:04:03.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4136" for this suite. Jan 20 13:04:11.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:04:11.981: INFO: namespace deployment-4136 deletion completed in 8.50795961s • [SLOW TEST:45.277 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:04:11.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7823, will wait for the garbage collector to delete the pods Jan 20 13:04:24.258: INFO: Deleting Job.batch foo took: 
7.124663ms Jan 20 13:04:24.559: INFO: Terminating Job.batch foo pods took: 300.862052ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:05:06.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7823" for this suite. Jan 20 13:05:12.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:05:13.010: INFO: namespace job-7823 deletion completed in 6.228528902s • [SLOW TEST:61.028 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:05:13.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 20 13:05:14.040: INFO: Waiting up to 5m0s for pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2" in namespace "emptydir-8264" to be "success or failure" Jan 20 13:05:14.046: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.527998ms Jan 20 13:05:16.059: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018814947s Jan 20 13:05:18.073: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032292516s Jan 20 13:05:20.081: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040561539s Jan 20 13:05:22.089: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048027452s STEP: Saw pod success Jan 20 13:05:22.089: INFO: Pod "pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2" satisfied condition "success or failure" Jan 20 13:05:22.095: INFO: Trying to get logs from node iruya-node pod pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2 container test-container: STEP: delete the pod Jan 20 13:05:22.170: INFO: Waiting for pod pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2 to disappear Jan 20 13:05:22.178: INFO: Pod pod-d9775217-5094-4c99-a3b6-4fa5ba99a3a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:05:22.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8264" for this suite. 
Jan 20 13:05:28.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:05:28.536: INFO: namespace emptydir-8264 deletion completed in 6.185346673s • [SLOW TEST:15.526 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:05:28.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 20 13:05:39.247: INFO: Successfully updated pod "annotationupdate6b37979f-7edd-416f-8e33-f2658d35dd8e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:05:43.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7541" for this suite. 
Jan 20 13:06:05.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:06:05.539: INFO: namespace projected-7541 deletion completed in 22.180729463s • [SLOW TEST:37.002 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:06:05.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-388cd213-2ef7-4d77-9957-689df292a478 in namespace container-probe-618 Jan 20 13:06:13.646: INFO: Started pod test-webserver-388cd213-2ef7-4d77-9957-689df292a478 in namespace container-probe-618 STEP: checking the pod's current state and verifying that restartCount is present Jan 20 13:06:13.650: INFO: Initial restart count of pod test-webserver-388cd213-2ef7-4d77-9957-689df292a478 is 0 STEP: deleting the 
pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:10:15.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-618" for this suite. Jan 20 13:10:21.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:10:21.556: INFO: namespace container-probe-618 deletion completed in 6.14577342s • [SLOW TEST:256.016 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:10:21.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Jan 20 13:10:21.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6354' Jan 20 13:10:23.717: INFO: stderr: "" Jan 20 13:10:23.717: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 20 13:10:23.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6354' Jan 20 13:10:28.057: INFO: stderr: "" Jan 20 13:10:28.057: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:10:28.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6354" for this suite. 
Jan 20 13:10:34.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:10:34.266: INFO: namespace kubectl-6354 deletion completed in 6.197987046s • [SLOW TEST:12.710 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:10:34.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:11:25.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1758" for this suite. Jan 20 13:11:31.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:11:31.924: INFO: namespace container-runtime-1758 deletion completed in 6.223255017s • [SLOW TEST:57.656 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:11:31.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-69738f6c-d4da-4139-9275-97a5b8f4fe24
STEP: Creating a pod to test consume configMaps
Jan 20 13:11:32.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81" in namespace "projected-2533" to be "success or failure"
Jan 20 13:11:32.182: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81": Phase="Pending", Reason="", readiness=false. Elapsed: 87.801577ms
Jan 20 13:11:34.194: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100369571s
Jan 20 13:11:36.202: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108024601s
Jan 20 13:11:38.211: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116696796s
Jan 20 13:11:40.225: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131466043s
STEP: Saw pod success
Jan 20 13:11:40.226: INFO: Pod "pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81" satisfied condition "success or failure"
Jan 20 13:11:40.230: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81 container projected-configmap-volume-test:
STEP: delete the pod
Jan 20 13:11:40.324: INFO: Waiting for pod pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81 to disappear
Jan 20 13:11:40.328: INFO: Pod pod-projected-configmaps-c1bb187b-0f3b-471e-8629-6468e5924e81 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:11:40.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2533" for this suite.
Jan 20 13:11:46.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:11:46.544: INFO: namespace projected-2533 deletion completed in 6.183937524s
• [SLOW TEST:14.618 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:11:46.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:11:46.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511" in namespace "projected-4150" to be "success or failure"
Jan 20 13:11:46.830: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511": Phase="Pending", Reason="", readiness=false. Elapsed: 153.968927ms
Jan 20 13:11:48.841: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165133929s
Jan 20 13:11:50.855: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17940156s
Jan 20 13:11:52.869: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19266824s
Jan 20 13:11:54.901: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.225256288s
STEP: Saw pod success
Jan 20 13:11:54.902: INFO: Pod "downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511" satisfied condition "success or failure"
Jan 20 13:11:54.912: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511 container client-container:
STEP: delete the pod
Jan 20 13:11:55.004: INFO: Waiting for pod downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511 to disappear
Jan 20 13:11:55.016: INFO: Pod downwardapi-volume-256d9f06-e9fd-4dd3-a8b8-a305a75b5511 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:11:55.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4150" for this suite.
Jan 20 13:12:01.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:12:01.221: INFO: namespace projected-4150 deletion completed in 6.200157946s
• [SLOW TEST:14.676 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:12:01.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6ce445fa-678e-4aab-9a61-238dda5c7c7f
STEP: Creating a pod to test consume configMaps
Jan 20 13:12:01.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284" in namespace "projected-9103" to be "success or failure"
Jan 20 13:12:01.341: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284": Phase="Pending", Reason="", readiness=false. Elapsed: 7.544838ms
Jan 20 13:12:03.350: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016524191s
Jan 20 13:12:05.361: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027720383s
Jan 20 13:12:07.382: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048725922s
Jan 20 13:12:09.391: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058041371s
STEP: Saw pod success
Jan 20 13:12:09.391: INFO: Pod "pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284" satisfied condition "success or failure"
Jan 20 13:12:09.395: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284 container projected-configmap-volume-test:
STEP: delete the pod
Jan 20 13:12:09.521: INFO: Waiting for pod pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284 to disappear
Jan 20 13:12:09.557: INFO: Pod pod-projected-configmaps-4a478acb-b0e0-45e2-b106-7c1c3e071284 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:12:09.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9103" for this suite.
Jan 20 13:12:15.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:12:15.747: INFO: namespace projected-9103 deletion completed in 6.184878187s
• [SLOW TEST:14.525 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:12:15.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c2311c2a-a8d9-420c-9d7c-19d8eb9d58ea
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c2311c2a-a8d9-420c-9d7c-19d8eb9d58ea
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:12:26.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7486" for this suite.
Jan 20 13:12:48.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:12:48.211: INFO: namespace projected-7486 deletion completed in 22.183612772s
• [SLOW TEST:32.463 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:12:48.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 20 13:12:48.325: INFO: Waiting up to 5m0s for pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622" in namespace "downward-api-8245" to be "success or failure"
Jan 20 13:12:48.334: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080935ms
Jan 20 13:12:50.344: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018589456s
Jan 20 13:12:52.351: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025633155s
Jan 20 13:12:54.365: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039143108s
Jan 20 13:12:56.384: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05795602s
STEP: Saw pod success
Jan 20 13:12:56.384: INFO: Pod "downward-api-ebf42d6e-8632-4769-abaa-e470dd650622" satisfied condition "success or failure"
Jan 20 13:12:56.389: INFO: Trying to get logs from node iruya-node pod downward-api-ebf42d6e-8632-4769-abaa-e470dd650622 container dapi-container:
STEP: delete the pod
Jan 20 13:12:56.463: INFO: Waiting for pod downward-api-ebf42d6e-8632-4769-abaa-e470dd650622 to disappear
Jan 20 13:12:56.471: INFO: Pod downward-api-ebf42d6e-8632-4769-abaa-e470dd650622 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:12:56.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8245" for this suite.
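[Editorial note: the repeated `Waiting up to 5m0s for pod … to be "success or failure"` / `Elapsed: …` entries throughout this run come from the framework polling a pod's phase at a fixed interval until it terminates or the deadline passes. A minimal Python sketch of that wait pattern (all names are hypothetical; the e2e framework itself is Go code):]

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    Returns "Succeeded" or "Failed"; raises TimeoutError if the pod is
    still Pending/Running when the deadline passes. clock/sleep are
    injectable so the loop can be tested without real waiting.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        # Mirrors the log's per-poll entry: Phase="..." Elapsed: ...s
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s "
                               f"(last phase {phase})")
        sleep(interval)
```

[With a 2 s interval this reproduces the cadence seen above: several Pending polls roughly 2 s apart, then a final Succeeded poll around 8 s elapsed.]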
Jan 20 13:13:02.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:13:02.721: INFO: namespace downward-api-8245 deletion completed in 6.233666013s
• [SLOW TEST:14.509 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:13:02.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 20 13:13:10.080: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:13:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7175" for this suite.
Jan 20 13:13:16.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:13:16.395: INFO: namespace container-runtime-7175 deletion completed in 6.188298519s
• [SLOW TEST:13.673 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:13:16.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:13:16.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63" in namespace "downward-api-8152" to be "success or failure"
Jan 20 13:13:16.547: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412389ms
Jan 20 13:13:18.559: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016537859s
Jan 20 13:13:20.575: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032345534s
Jan 20 13:13:22.590: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047957979s
Jan 20 13:13:24.597: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054699754s
STEP: Saw pod success
Jan 20 13:13:24.597: INFO: Pod "downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63" satisfied condition "success or failure"
Jan 20 13:13:24.602: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63 container client-container:
STEP: delete the pod
Jan 20 13:13:24.705: INFO: Waiting for pod downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63 to disappear
Jan 20 13:13:24.716: INFO: Pod downwardapi-volume-13d621f1-5c61-4cc1-8767-ab5a04ad9b63 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:13:24.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8152" for this suite.
Jan 20 13:13:30.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:13:30.935: INFO: namespace downward-api-8152 deletion completed in 6.208608622s
• [SLOW TEST:14.539 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:13:30.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-91276e1d-593e-448a-8039-303fa02998e5
STEP: Creating a pod to test consume secrets
Jan 20 13:13:31.054: INFO: Waiting up to 5m0s for pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3" in namespace "secrets-8591" to be "success or failure"
Jan 20 13:13:31.070: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028936ms
Jan 20 13:13:33.080: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025188951s
Jan 20 13:13:35.095: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040181152s
Jan 20 13:13:37.105: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050939172s
Jan 20 13:13:39.118: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Running", Reason="", readiness=true. Elapsed: 8.063163402s
Jan 20 13:13:41.131: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076345767s
STEP: Saw pod success
Jan 20 13:13:41.131: INFO: Pod "pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3" satisfied condition "success or failure"
Jan 20 13:13:41.165: INFO: Trying to get logs from node iruya-node pod pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3 container secret-volume-test:
STEP: delete the pod
Jan 20 13:13:41.299: INFO: Waiting for pod pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3 to disappear
Jan 20 13:13:41.341: INFO: Pod pod-secrets-fa542c2b-0ad0-4d97-b7f4-42d0dbb13eb3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:13:41.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8591" for this suite.
Jan 20 13:13:47.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:13:47.586: INFO: namespace secrets-8591 deletion completed in 6.239532032s
• [SLOW TEST:16.651 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:13:47.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 20 13:13:47.646: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 13:13:47.712: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 13:13:47.716: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 20 13:13:47.726: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.726: INFO: Container kube-proxy ready: true, restart count 0
Jan 20 13:13:47.726: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 20 13:13:47.726: INFO: Container weave ready: true, restart count 0
Jan 20 13:13:47.726: INFO: Container weave-npc ready: true, restart count 0
Jan 20 13:13:47.726: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 20 13:13:47.738: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container kube-controller-manager ready: true, restart count 19
Jan 20 13:13:47.738: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container kube-proxy ready: true, restart count 0
Jan 20 13:13:47.738: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container kube-apiserver ready: true, restart count 0
Jan 20 13:13:47.738: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container kube-scheduler ready: true, restart count 13
Jan 20 13:13:47.738: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container coredns ready: true, restart count 0
Jan 20 13:13:47.738: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container etcd ready: true, restart count 0
Jan 20 13:13:47.738: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container weave ready: true, restart count 0
Jan 20 13:13:47.738: INFO: Container weave-npc ready: true, restart count 0
Jan 20 13:13:47.738: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 20 13:13:47.738: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15eb9a8db6a3febe], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:13:48.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1051" for this suite.
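[Editorial note: the FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector") reflects the nodeSelector predicate: a pod fits a node only when every key/value pair in the pod's nodeSelector appears among the node's labels. A minimal Python sketch of that subset check (illustrative only, not the scheduler's actual Go code):]

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True if every key/value in the pod's nodeSelector is present,
    with the same value, in the node's labels. An empty selector
    matches every node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def feasible_nodes(nodes: dict, node_selector: dict) -> list:
    """Return names of nodes whose labels satisfy the selector.
    `nodes` maps node name -> label dict."""
    return [name for name, labels in nodes.items()
            if node_selector_matches(labels, node_selector)]
```

[With a nonempty selector that no node carries, `feasible_nodes` is empty, which is exactly the "0/2 nodes are available" outcome the test expects.]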
Jan 20 13:13:55.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:13:55.220: INFO: namespace sched-pred-1051 deletion completed in 6.439590827s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.634 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:13:55.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 20 13:14:03.358: INFO: Pod pod-hostip-9f0c2eab-8fd3-4e80-a673-dbf998b103a7 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:14:03.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1263" for this suite.
Jan 20 13:14:25.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:14:25.534: INFO: namespace pods-1263 deletion completed in 22.166938335s
• [SLOW TEST:30.314 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:14:25.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0120 13:14:28.793456 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 13:14:28.793: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:14:28.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8337" for this suite. 
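The "when not orphaning" behavior the garbage-collector test polls for ("expected 0 rs", "expected 0 pods") corresponds to cascading deletion of the Deployment's dependents. As a sketch, a non-orphaning delete carries a `propagationPolicy` in its DeleteOptions body (the exact body the framework sends is not shown in the log; the values below are the standard API ones):

```yaml
# Illustrative DeleteOptions for a non-orphaning delete of the Deployment.
# With Background (or Foreground) propagation the garbage collector removes
# the dependent ReplicaSet and Pods, which the polling above waits to observe;
# Orphan would leave them behind and the test would fail.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background
```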
Jan 20 13:14:35.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:14:35.263: INFO: namespace gc-8337 deletion completed in 6.467344801s • [SLOW TEST:9.728 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:14:35.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504 Jan 20 13:14:35.454: INFO: Pod name my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504: Found 0 pods out of 1 Jan 20 13:14:40.470: INFO: Pod name my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504: Found 1 pods out of 1 Jan 20 13:14:40.470: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504" are running Jan 20 13:14:42.519: INFO: Pod "my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504-4cl24" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 13:14:35 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 13:14:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 13:14:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 13:14:35 +0000 UTC Reason: Message:}]) Jan 20 13:14:42.519: INFO: Trying to dial the pod Jan 20 13:14:47.560: INFO: Controller my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504: Got expected result from replica 1 [my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504-4cl24]: "my-hostname-basic-ecfe9421-0606-4536-b82b-099419578504-4cl24", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:14:47.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7667" for this suite. 
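The ReplicationController under test can be sketched as a manifest. The image and port are assumptions (the log only shows the generated `my-hostname-basic-<uuid>` name and that each replica serves back its own hostname):

```yaml
# Hedged sketch of the RC the test creates; image tag and port are assumed.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic        # the test appends a generated UUID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
        ports:
        - containerPort: 9376    # assumed; the test dials each replica and
                                 # expects the pod's own name in the response
```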
Jan 20 13:14:53.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:14:53.767: INFO: namespace replication-controller-7667 deletion completed in 6.197482212s • [SLOW TEST:18.504 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:14:53.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 20 13:14:53.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae" in namespace "downward-api-9576" to be "success or failure" Jan 20 13:14:53.947: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.280265ms Jan 20 13:14:55.962: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030657162s Jan 20 13:14:57.975: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04367142s Jan 20 13:14:59.986: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054497404s Jan 20 13:15:01.997: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065327761s Jan 20 13:15:04.013: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081208345s STEP: Saw pod success Jan 20 13:15:04.013: INFO: Pod "downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae" satisfied condition "success or failure" Jan 20 13:15:04.018: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae container client-container: STEP: delete the pod Jan 20 13:15:04.297: INFO: Waiting for pod downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae to disappear Jan 20 13:15:04.324: INFO: Pod downwardapi-volume-c6ac03e0-452b-4844-ade0-c35d4a0af0ae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:15:04.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9576" for this suite. 
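The downward API volume plugin being exercised here exposes a container's CPU limit as a file. A hedged sketch (pod name, image, and paths are illustrative; `resourceFieldRef` with `limits.cpu` is the standard mechanism):

```yaml
# Illustrative pod: reads its own CPU limit from a downward API volume file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container       # matches the container name the log reads logs from
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"              # illustrative; with the default divisor the
                                 # value is rounded up to whole CPUs in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```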
Jan 20 13:15:10.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:15:10.548: INFO: namespace downward-api-9576 deletion completed in 6.197860167s • [SLOW TEST:16.780 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:15:10.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 20 13:15:10.662: INFO: Waiting up to 5m0s for pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076" in namespace "emptydir-8438" to be "success or failure" Jan 20 13:15:10.667: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427851ms Jan 20 13:15:12.686: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02344185s Jan 20 13:15:14.692: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029792649s Jan 20 13:15:16.704: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041936297s Jan 20 13:15:18.712: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049300041s STEP: Saw pod success Jan 20 13:15:18.712: INFO: Pod "pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076" satisfied condition "success or failure" Jan 20 13:15:18.717: INFO: Trying to get logs from node iruya-node pod pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076 container test-container: STEP: delete the pod Jan 20 13:15:18.822: INFO: Waiting for pod pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076 to disappear Jan 20 13:15:18.835: INFO: Pod pod-c37c56eb-ceb2-46cf-9fbc-6722e6af6076 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:15:18.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8438" for this suite. 
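The "(root,0644,default)" label decodes as: file written as root, with mode 0644, on the default (node-disk-backed) emptyDir medium. A hedged sketch of an equivalent pod (names and commands are illustrative):

```yaml
# Illustrative pod for the emptyDir 0644/default-medium case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file with mode 0644, then print its permissions and content.
    command: ["sh", "-c",
      "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium = node disk; medium: Memory would use tmpfs
```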
Jan 20 13:15:24.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:15:25.047: INFO: namespace emptydir-8438 deletion completed in 6.20600898s • [SLOW TEST:14.498 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:15:25.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9071 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 13:15:25.102: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 13:16:05.361: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9071 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 13:16:05.361: INFO: >>> kubeConfig: 
/root/.kube/config I0120 13:16:05.443731 8 log.go:172] (0xc0009de4d0) (0xc001dac3c0) Create stream I0120 13:16:05.443851 8 log.go:172] (0xc0009de4d0) (0xc001dac3c0) Stream added, broadcasting: 1 I0120 13:16:05.450464 8 log.go:172] (0xc0009de4d0) Reply frame received for 1 I0120 13:16:05.450516 8 log.go:172] (0xc0009de4d0) (0xc00242c820) Create stream I0120 13:16:05.450533 8 log.go:172] (0xc0009de4d0) (0xc00242c820) Stream added, broadcasting: 3 I0120 13:16:05.452660 8 log.go:172] (0xc0009de4d0) Reply frame received for 3 I0120 13:16:05.452685 8 log.go:172] (0xc0009de4d0) (0xc0006ea640) Create stream I0120 13:16:05.452693 8 log.go:172] (0xc0009de4d0) (0xc0006ea640) Stream added, broadcasting: 5 I0120 13:16:05.454438 8 log.go:172] (0xc0009de4d0) Reply frame received for 5 I0120 13:16:06.607131 8 log.go:172] (0xc0009de4d0) Data frame received for 3 I0120 13:16:06.607184 8 log.go:172] (0xc00242c820) (3) Data frame handling I0120 13:16:06.607216 8 log.go:172] (0xc00242c820) (3) Data frame sent I0120 13:16:06.726907 8 log.go:172] (0xc0009de4d0) Data frame received for 1 I0120 13:16:06.727080 8 log.go:172] (0xc0009de4d0) (0xc0006ea640) Stream removed, broadcasting: 5 I0120 13:16:06.727200 8 log.go:172] (0xc001dac3c0) (1) Data frame handling I0120 13:16:06.727311 8 log.go:172] (0xc001dac3c0) (1) Data frame sent I0120 13:16:06.727383 8 log.go:172] (0xc0009de4d0) (0xc00242c820) Stream removed, broadcasting: 3 I0120 13:16:06.727464 8 log.go:172] (0xc0009de4d0) (0xc001dac3c0) Stream removed, broadcasting: 1 I0120 13:16:06.727547 8 log.go:172] (0xc0009de4d0) Go away received I0120 13:16:06.729488 8 log.go:172] (0xc0009de4d0) (0xc001dac3c0) Stream removed, broadcasting: 1 I0120 13:16:06.729583 8 log.go:172] (0xc0009de4d0) (0xc00242c820) Stream removed, broadcasting: 3 I0120 13:16:06.729612 8 log.go:172] (0xc0009de4d0) (0xc0006ea640) Stream removed, broadcasting: 5 Jan 20 13:16:06.729: INFO: Found all expected endpoints: [netserver-0] Jan 20 13:16:06.741: INFO: ExecWithOptions 
{Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9071 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 13:16:06.741: INFO: >>> kubeConfig: /root/.kube/config I0120 13:16:06.849679 8 log.go:172] (0xc0009df130) (0xc001dac820) Create stream I0120 13:16:06.850063 8 log.go:172] (0xc0009df130) (0xc001dac820) Stream added, broadcasting: 1 I0120 13:16:06.865091 8 log.go:172] (0xc0009df130) Reply frame received for 1 I0120 13:16:06.865180 8 log.go:172] (0xc0009df130) (0xc0013541e0) Create stream I0120 13:16:06.865200 8 log.go:172] (0xc0009df130) (0xc0013541e0) Stream added, broadcasting: 3 I0120 13:16:06.867025 8 log.go:172] (0xc0009df130) Reply frame received for 3 I0120 13:16:06.867072 8 log.go:172] (0xc0009df130) (0xc0006ea820) Create stream I0120 13:16:06.867083 8 log.go:172] (0xc0009df130) (0xc0006ea820) Stream added, broadcasting: 5 I0120 13:16:06.868825 8 log.go:172] (0xc0009df130) Reply frame received for 5 I0120 13:16:08.011250 8 log.go:172] (0xc0009df130) Data frame received for 3 I0120 13:16:08.011302 8 log.go:172] (0xc0013541e0) (3) Data frame handling I0120 13:16:08.011322 8 log.go:172] (0xc0013541e0) (3) Data frame sent I0120 13:16:08.139459 8 log.go:172] (0xc0009df130) Data frame received for 1 I0120 13:16:08.139641 8 log.go:172] (0xc001dac820) (1) Data frame handling I0120 13:16:08.139680 8 log.go:172] (0xc001dac820) (1) Data frame sent I0120 13:16:08.140328 8 log.go:172] (0xc0009df130) (0xc0013541e0) Stream removed, broadcasting: 3 I0120 13:16:08.140425 8 log.go:172] (0xc0009df130) (0xc0006ea820) Stream removed, broadcasting: 5 I0120 13:16:08.140478 8 log.go:172] (0xc0009df130) (0xc001dac820) Stream removed, broadcasting: 1 I0120 13:16:08.140507 8 log.go:172] (0xc0009df130) Go away received I0120 13:16:08.140764 8 log.go:172] (0xc0009df130) (0xc001dac820) Stream removed, broadcasting: 1 I0120 13:16:08.140788 8 
log.go:172] (0xc0009df130) (0xc0013541e0) Stream removed, broadcasting: 3 I0120 13:16:08.140800 8 log.go:172] (0xc0009df130) (0xc0006ea820) Stream removed, broadcasting: 5 Jan 20 13:16:08.140: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:16:08.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9071" for this suite. Jan 20 13:16:32.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:16:32.287: INFO: namespace pod-network-test-9071 deletion completed in 24.137634151s • [SLOW TEST:67.240 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:16:32.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination 
message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 20 13:16:41.539: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:16:41.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7375" for this suite. Jan 20 13:16:47.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:16:47.697: INFO: namespace container-runtime-7375 deletion completed in 6.121746649s • [SLOW TEST:15.409 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:16:47.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 20 13:16:47.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5818' Jan 20 13:16:48.166: INFO: stderr: "" Jan 20 13:16:48.166: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jan 20 13:16:49.191: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:49.191: INFO: Found 0 / 1 Jan 20 13:16:50.177: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:50.177: INFO: Found 0 / 1 Jan 20 13:16:51.179: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:51.179: INFO: Found 0 / 1 Jan 20 13:16:52.188: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:52.188: INFO: Found 0 / 1 Jan 20 13:16:53.176: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:53.176: INFO: Found 0 / 1 Jan 20 13:16:54.189: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:54.190: INFO: Found 0 / 1 Jan 20 13:16:55.180: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:55.180: INFO: Found 0 / 1 Jan 20 13:16:56.187: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:56.187: INFO: Found 1 / 1 Jan 20 13:16:56.187: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jan 20 13:16:56.192: INFO: Selector matched 1 pods for map[app:redis] Jan 20 13:16:56.192: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 20 13:16:56.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818' Jan 20 13:16:56.428: INFO: stderr: "" Jan 20 13:16:56.428: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jan 13:16:54.833 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 13:16:54.834 # Server started, Redis version 3.2.12\n1:M 20 Jan 13:16:54.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Jan 13:16:54.834 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 20 13:16:56.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818 --tail=1' Jan 20 13:16:56.596: INFO: stderr: "" Jan 20 13:16:56.596: INFO: stdout: "1:M 20 Jan 13:16:54.834 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 20 13:16:56.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818 --limit-bytes=1' Jan 20 13:16:56.774: INFO: stderr: "" Jan 20 13:16:56.774: INFO: stdout: " " STEP: exposing timestamps Jan 20 13:16:56.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818 --tail=1 --timestamps' Jan 20 13:16:56.997: INFO: stderr: "" Jan 20 13:16:56.997: INFO: stdout: "2020-01-20T13:16:54.835730197Z 1:M 20 Jan 13:16:54.834 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 20 13:16:59.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818 --since=1s' Jan 20 13:16:59.746: INFO: stderr: "" Jan 20 13:16:59.746: INFO: stdout: "" Jan 20 13:16:59.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dtbnx redis-master --namespace=kubectl-5818 --since=24h' Jan 20 13:16:59.932: INFO: stderr: "" Jan 20 13:16:59.932: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jan 13:16:54.833 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 13:16:54.834 # Server started, Redis version 3.2.12\n1:M 20 Jan 13:16:54.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Jan 13:16:54.834 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 20 13:16:59.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5818' Jan 20 13:17:00.064: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 20 13:17:00.064: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 20 13:17:00.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5818' Jan 20 13:17:00.175: INFO: stderr: "No resources found.\n" Jan 20 13:17:00.175: INFO: stdout: "" Jan 20 13:17:00.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5818 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 20 13:17:00.333: INFO: stderr: "" Jan 20 13:17:00.333: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:17:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5818" for this suite. 
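This run exercises the main `kubectl logs` filters visible in the transcript: `--tail=1` (last line only), `--limit-bytes=1` (first byte only), `--timestamps` (RFC3339 prefix per line), and `--since=1s` / `--since=24h` (time-window restriction). The replication controller it pipes to `kubectl create -f -` can be sketched as follows; the image tag is inferred from the "Redis 3.2.12" banner in the captured logs, and the rest is an assumption:

```yaml
# Hedged sketch of the redis-master RC the test creates via `kubectl create -f -`.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis                   # the test polls with selector map[app:redis]
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master       # the container name passed to `kubectl logs`
        image: redis:3.2.12      # assumed tag; the log confirms Redis 3.2.12 runs
        ports:
        - containerPort: 6379
```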
Jan 20 13:17:22.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:17:22.520: INFO: namespace kubectl-5818 deletion completed in 22.179713044s • [SLOW TEST:34.823 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:17:22.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 20 13:17:22.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad" in namespace "downward-api-6880" to be "success or failure" Jan 20 13:17:22.703: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.083727ms Jan 20 13:17:24.712: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033405433s Jan 20 13:17:26.733: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054984615s Jan 20 13:17:28.753: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07517198s Jan 20 13:17:30.780: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102186188s STEP: Saw pod success Jan 20 13:17:30.781: INFO: Pod "downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad" satisfied condition "success or failure" Jan 20 13:17:30.790: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad container client-container: STEP: delete the pod Jan 20 13:17:30.884: INFO: Waiting for pod downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad to disappear Jan 20 13:17:30.950: INFO: Pod downwardapi-volume-0e707e8a-4251-4ce5-a7c3-66ba1d928dad no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:17:30.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6880" for this suite. 
Jan 20 13:17:36.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:17:37.126: INFO: namespace downward-api-6880 deletion completed in 6.163223972s • [SLOW TEST:14.605 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:17:37.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 20 13:17:37.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340" in namespace "projected-536" to be "success or failure" Jan 20 13:17:37.303: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.567954ms Jan 20 13:17:39.312: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035608385s Jan 20 13:17:41.319: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0426774s Jan 20 13:17:43.333: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056808135s Jan 20 13:17:45.352: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075413255s Jan 20 13:17:47.358: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082011787s STEP: Saw pod success Jan 20 13:17:47.358: INFO: Pod "downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340" satisfied condition "success or failure" Jan 20 13:17:47.361: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340 container client-container: STEP: delete the pod Jan 20 13:17:47.494: INFO: Waiting for pod downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340 to disappear Jan 20 13:17:47.504: INFO: Pod downwardapi-volume-07c6739d-9d88-4857-be4e-ca75e9018340 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:17:47.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-536" for this suite. 
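The "should set DefaultMode on files" case uses a projected downward API volume and asserts the permission bits of the mounted files. A sketch of the relevant spec, with an assumed mode value for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-example  # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # mode applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```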
Jan 20 13:17:53.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:17:53.666: INFO: namespace projected-536 deletion completed in 6.154097547s • [SLOW TEST:16.539 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:17:53.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jan 20 13:17:53.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 20 13:17:53.839: INFO: stderr: "" Jan 20 13:17:53.840: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:17:53.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1809" for this suite. Jan 20 13:17:59.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:18:00.027: INFO: namespace kubectl-1809 deletion completed in 6.182056633s • [SLOW TEST:6.361 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:18:00.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-upd-75b0fa0e-88ad-4f51-935c-5899ccbd545b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:18:10.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5681" for this suite. Jan 20 13:18:32.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:18:32.496: INFO: namespace configmap-5681 deletion completed in 22.208941198s • [SLOW TEST:32.469 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:18:32.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 20 13:18:32.630: INFO: Waiting up to 5m0s for pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447" in namespace "emptydir-2606" to be "success or failure" 
Jan 20 13:18:32.635: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447": Phase="Pending", Reason="", readiness=false. Elapsed: 4.897987ms Jan 20 13:18:34.643: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013773221s Jan 20 13:18:36.691: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061624788s Jan 20 13:18:38.702: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072610291s Jan 20 13:18:40.743: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113700582s STEP: Saw pod success Jan 20 13:18:40.744: INFO: Pod "pod-3610c672-54eb-4151-a1de-ba0de96cf447" satisfied condition "success or failure" Jan 20 13:18:40.749: INFO: Trying to get logs from node iruya-node pod pod-3610c672-54eb-4151-a1de-ba0de96cf447 container test-container: STEP: delete the pod Jan 20 13:18:40.815: INFO: Waiting for pod pod-3610c672-54eb-4151-a1de-ba0de96cf447 to disappear Jan 20 13:18:40.836: INFO: Pod pod-3610c672-54eb-4151-a1de-ba0de96cf447 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:18:40.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2606" for this suite. 
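The "(root,0666,tmpfs)" case exercises a memory-backed emptyDir: the pod writes a file with mode 0666 into a tmpfs mount and verifies the result. A minimal sketch, with an illustrative command standing in for the test's mount-tester container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example         # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo data > /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed, as in the (root,0666,tmpfs) case
```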
Jan 20 13:18:46.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:18:46.981: INFO: namespace emptydir-2606 deletion completed in 6.112334708s • [SLOW TEST:14.484 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:18:46.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 20 13:18:47.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3922' Jan 20 13:18:47.218: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 13:18:47.219: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 20 13:18:47.241: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 20 13:18:47.280: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 20 13:18:47.333: INFO: scanned /root for discovery docs: Jan 20 13:18:47.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3922' Jan 20 13:19:09.710: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 20 13:19:09.710: INFO: stdout: "Created e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9\nScaling up e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 20 13:19:09.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3922' Jan 20 13:19:09.919: INFO: stderr: "" Jan 20 13:19:09.919: INFO: stdout: "e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9-mgbck " Jan 20 13:19:09.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9-mgbck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3922' Jan 20 13:19:10.068: INFO: stderr: "" Jan 20 13:19:10.068: INFO: stdout: "true" Jan 20 13:19:10.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9-mgbck -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3922' Jan 20 13:19:10.199: INFO: stderr: "" Jan 20 13:19:10.199: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 20 13:19:10.199: INFO: e2e-test-nginx-rc-54c81682fa4e6ea69a4eba5cba0a1da9-mgbck is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 20 13:19:10.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3922' Jan 20 13:19:10.339: INFO: stderr: "" Jan 20 13:19:10.339: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:19:10.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3922" for this suite. 
Jan 20 13:19:32.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:19:32.553: INFO: namespace kubectl-3922 deletion completed in 22.200234496s • [SLOW TEST:45.572 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:19:32.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 20 13:19:32.682: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 20 13:19:32.692: INFO: Number of nodes with available pods: 0 Jan 20 13:19:32.692: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 20 13:19:32.849: INFO: Number of nodes with available pods: 0 Jan 20 13:19:32.849: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:33.870: INFO: Number of nodes with available pods: 0 Jan 20 13:19:33.870: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:34.863: INFO: Number of nodes with available pods: 0 Jan 20 13:19:34.863: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:35.878: INFO: Number of nodes with available pods: 0 Jan 20 13:19:35.879: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:36.863: INFO: Number of nodes with available pods: 0 Jan 20 13:19:36.863: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:37.866: INFO: Number of nodes with available pods: 0 Jan 20 13:19:37.866: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:38.863: INFO: Number of nodes with available pods: 0 Jan 20 13:19:38.863: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:39.869: INFO: Number of nodes with available pods: 0 Jan 20 13:19:39.869: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:40.865: INFO: Number of nodes with available pods: 1 Jan 20 13:19:40.865: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 20 13:19:40.925: INFO: Number of nodes with available pods: 1 Jan 20 13:19:40.925: INFO: Number of running nodes: 0, number of available pods: 1 Jan 20 13:19:41.934: INFO: Number of nodes with available pods: 0 Jan 20 13:19:41.934: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 20 13:19:41.999: INFO: Number of nodes with available pods: 0 Jan 20 13:19:41.999: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:43.007: INFO: Number of nodes with available pods: 0 Jan 20 
13:19:43.007: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:44.016: INFO: Number of nodes with available pods: 0 Jan 20 13:19:44.016: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:45.012: INFO: Number of nodes with available pods: 0 Jan 20 13:19:45.013: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:46.007: INFO: Number of nodes with available pods: 0 Jan 20 13:19:46.007: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:47.011: INFO: Number of nodes with available pods: 0 Jan 20 13:19:47.011: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:48.007: INFO: Number of nodes with available pods: 0 Jan 20 13:19:48.007: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:49.019: INFO: Number of nodes with available pods: 0 Jan 20 13:19:49.019: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:50.012: INFO: Number of nodes with available pods: 0 Jan 20 13:19:50.012: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:51.011: INFO: Number of nodes with available pods: 0 Jan 20 13:19:51.011: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:52.014: INFO: Number of nodes with available pods: 0 Jan 20 13:19:52.014: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:53.007: INFO: Number of nodes with available pods: 0 Jan 20 13:19:53.007: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:54.015: INFO: Number of nodes with available pods: 0 Jan 20 13:19:54.015: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:55.009: INFO: Number of nodes with available pods: 0 Jan 20 13:19:55.009: INFO: Node iruya-node is running more than one daemon pod Jan 20 13:19:56.007: INFO: Number of nodes with available pods: 1 Jan 20 13:19:56.007: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7133, will wait for the garbage collector to delete the pods Jan 20 13:19:56.084: INFO: Deleting DaemonSet.extensions daemon-set took: 14.41861ms Jan 20 13:19:56.385: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.536795ms Jan 20 13:20:06.597: INFO: Number of nodes with available pods: 0 Jan 20 13:20:06.597: INFO: Number of running nodes: 0, number of available pods: 0 Jan 20 13:20:06.611: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7133/daemonsets","resourceVersion":"21181396"},"items":null} Jan 20 13:20:06.617: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7133/pods","resourceVersion":"21181396"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:20:06.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7133" for this suite. 
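The "complex daemon" case above drives scheduling through a node selector: pods run only on nodes carrying the matching label, which is why relabelling a node from blue to green unschedules them. A sketch of such a DaemonSet (label keys and image are illustrative, not taken from the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set                 # assumed pod label
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: blue                    # pods schedule only on nodes labelled color=blue
      containers:
      - name: app
        image: nginx:1.14-alpine       # illustrative image
```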
Jan 20 13:20:12.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:20:12.935: INFO: namespace daemonsets-7133 deletion completed in 6.244527358s • [SLOW TEST:40.381 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:20:12.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:20:19.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4942" for this suite. 
Jan 20 13:20:25.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:20:25.693: INFO: namespace namespaces-4942 deletion completed in 6.140654013s STEP: Destroying namespace "nsdeletetest-9860" for this suite. Jan 20 13:20:25.696: INFO: Namespace nsdeletetest-9860 was already deleted STEP: Destroying namespace "nsdeletetest-8312" for this suite. Jan 20 13:20:31.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:20:31.916: INFO: namespace nsdeletetest-8312 deletion completed in 6.219914345s • [SLOW TEST:18.981 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:20:31.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait 
for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0120 13:21:02.213740 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 13:21:02.213: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:21:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6301" for this suite. 
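The orphaning behaviour verified above comes from the propagation policy on the delete request: with `Orphan`, the garbage collector must leave the dependent ReplicaSet in place. A sketch of the DeleteOptions body such a delete sends (the CLI flag in the comment reflects kubectl of this era; treat details as illustrative):

```yaml
# Sent as the body of the DELETE call on the Deployment.
# Rough CLI equivalent for this kubectl vintage: kubectl delete deployment <name> --cascade=false
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan              # keep dependents (the ReplicaSet) instead of cascading
```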
Jan 20 13:21:10.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:21:10.385: INFO: namespace gc-6301 deletion completed in 8.164688002s
• [SLOW TEST:38.469 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:21:10.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ad4fbcb8-5262-4d52-8e8a-1b32e524afc8 in namespace container-probe-2229
Jan 20 13:21:20.711: INFO: Started pod busybox-ad4fbcb8-5262-4d52-8e8a-1b32e524afc8 in namespace container-probe-2229
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 13:21:20.716: INFO: Initial restart count of pod busybox-ad4fbcb8-5262-4d52-8e8a-1b32e524afc8 is 0
Jan 20 13:22:17.121: INFO: Restart count of pod container-probe-2229/busybox-ad4fbcb8-5262-4d52-8e8a-1b32e524afc8 is now 1 (56.404664176s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:22:17.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2229" for this suite.
Jan 20 13:22:23.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:22:23.502: INFO: namespace container-probe-2229 deletion completed in 6.236659714s
• [SLOW TEST:73.116 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:22:23.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 13:22:23.653: INFO: Number of nodes with available pods: 0
Jan 20 13:22:23.653: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:25.147: INFO: Number of nodes with available pods: 0
Jan 20 13:22:25.147: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:25.668: INFO: Number of nodes with available pods: 0
Jan 20 13:22:25.668: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:26.675: INFO: Number of nodes with available pods: 0
Jan 20 13:22:26.675: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:27.663: INFO: Number of nodes with available pods: 0
Jan 20 13:22:27.663: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:29.481: INFO: Number of nodes with available pods: 0
Jan 20 13:22:29.481: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:30.056: INFO: Number of nodes with available pods: 0
Jan 20 13:22:30.057: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:30.695: INFO: Number of nodes with available pods: 0
Jan 20 13:22:30.695: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:31.672: INFO: Number of nodes with available pods: 0
Jan 20 13:22:31.672: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:22:32.668: INFO: Number of nodes with available pods: 1
Jan 20 13:22:32.668: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:33.669: INFO: Number of nodes with available pods: 2
Jan 20 13:22:33.669: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 20 13:22:33.699: INFO: Number of nodes with available pods: 1
Jan 20 13:22:33.699: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:34.716: INFO: Number of nodes with available pods: 1
Jan 20 13:22:34.716: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:35.716: INFO: Number of nodes with available pods: 1
Jan 20 13:22:35.716: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:36.721: INFO: Number of nodes with available pods: 1
Jan 20 13:22:36.721: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:37.907: INFO: Number of nodes with available pods: 1
Jan 20 13:22:37.907: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:38.730: INFO: Number of nodes with available pods: 1
Jan 20 13:22:38.730: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:39.739: INFO: Number of nodes with available pods: 1
Jan 20 13:22:39.739: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:40.725: INFO: Number of nodes with available pods: 1
Jan 20 13:22:40.725: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:41.721: INFO: Number of nodes with available pods: 1
Jan 20 13:22:41.721: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:42.725: INFO: Number of nodes with available pods: 1
Jan 20 13:22:42.726: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:44.801: INFO: Number of nodes with available pods: 1
Jan 20 13:22:44.801: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:45.784: INFO: Number of nodes with available pods: 1
Jan 20 13:22:45.784: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 13:22:46.712: INFO: Number of nodes with available pods: 2
Jan 20 13:22:46.712: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7155, will wait for the garbage collector to delete the pods
Jan 20 13:22:46.779: INFO: Deleting DaemonSet.extensions daemon-set took: 12.093831ms
Jan 20 13:22:47.080: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.451031ms
Jan 20 13:22:57.899: INFO: Number of nodes with available pods: 0
Jan 20 13:22:57.899: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 13:22:57.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7155/daemonsets","resourceVersion":"21181815"},"items":null}
Jan 20 13:22:57.912: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7155/pods","resourceVersion":"21181815"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:22:57.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7155" for this suite.
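The "simple daemon" exercised above is a DaemonSet whose pods the controller is expected to place on every schedulable node, and to revive when one is deleted. A minimal sketch of such a manifest (the labels here are hypothetical — the e2e framework generates its own spec; the image is inferred from the rollback test later in this log, which expects docker.io/library/nginx:1.14-alpine):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # matches the name in the log; the namespace is generated per test
spec:
  selector:
    matchLabels:
      app: daemon-set         # hypothetical label; the real test uses its own selector
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Deleting one of the resulting pods makes the controller recreate it, which is the "daemon pod is revived" phase polled above until every node reports an available pod again.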
Jan 20 13:23:03.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:23:04.064: INFO: namespace daemonsets-7155 deletion completed in 6.128771472s
• [SLOW TEST:40.562 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:23:04.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9710
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9710 to expose endpoints map[]
Jan 20 13:23:04.250: INFO: Get endpoints failed (12.800805ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 20 13:23:05.280: INFO: successfully validated that service endpoint-test2 in namespace services-9710 exposes endpoints map[] (1.043223166s elapsed)
STEP: Creating pod pod1 in namespace services-9710
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9710 to expose endpoints map[pod1:[80]]
Jan 20 13:23:09.385: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.090284715s elapsed, will retry)
Jan 20 13:23:12.514: INFO: successfully validated that service endpoint-test2 in namespace services-9710 exposes endpoints map[pod1:[80]] (7.2193798s elapsed)
STEP: Creating pod pod2 in namespace services-9710
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9710 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 20 13:23:17.456: INFO: Unexpected endpoints: found map[320b8167-24fd-4f20-a67f-68ac70185a63:[80]], expected map[pod1:[80] pod2:[80]] (4.922785277s elapsed, will retry)
Jan 20 13:23:20.554: INFO: successfully validated that service endpoint-test2 in namespace services-9710 exposes endpoints map[pod1:[80] pod2:[80]] (8.020853482s elapsed)
STEP: Deleting pod pod1 in namespace services-9710
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9710 to expose endpoints map[pod2:[80]]
Jan 20 13:23:20.659: INFO: successfully validated that service endpoint-test2 in namespace services-9710 exposes endpoints map[pod2:[80]] (66.695123ms elapsed)
STEP: Deleting pod pod2 in namespace services-9710
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9710 to expose endpoints map[]
Jan 20 13:23:21.735: INFO: successfully validated that service endpoint-test2 in namespace services-9710 exposes endpoints map[] (1.06242492s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:23:21.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9710" for this suite.
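The endpoint bookkeeping above follows from label selection: the service selects pods by label, and each matching ready pod contributes its port to the endpoints map, so creating pod1 yields map[pod1:[80]] and deleting it removes the entry again. A sketch of the shape involved (the selector label is hypothetical; the test constructs its own specs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2     # hypothetical label shared by pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2     # while pod1 is ready, the endpoints show map[pod1:[80]]
spec:
  containers:
  - name: serve
    image: docker.io/library/nginx:1.14-alpine   # assumption; any container serving port 80 works
    ports:
    - containerPort: 80
```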
Jan 20 13:23:44.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:23:44.948: INFO: namespace services-9710 deletion completed in 23.13240281s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:40.883 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:23:44.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 20 13:23:45.097: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5440,SelfLink:/api/v1/namespaces/watch-5440/configmaps/e2e-watch-test-watch-closed,UID:cd691952-d304-4a6a-8cb3-1de64c6cf434,ResourceVersion:21181957,Generation:0,CreationTimestamp:2020-01-20 13:23:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 13:23:45.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5440,SelfLink:/api/v1/namespaces/watch-5440/configmaps/e2e-watch-test-watch-closed,UID:cd691952-d304-4a6a-8cb3-1de64c6cf434,ResourceVersion:21181958,Generation:0,CreationTimestamp:2020-01-20 13:23:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 20 13:23:45.127: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5440,SelfLink:/api/v1/namespaces/watch-5440/configmaps/e2e-watch-test-watch-closed,UID:cd691952-d304-4a6a-8cb3-1de64c6cf434,ResourceVersion:21181959,Generation:0,CreationTimestamp:2020-01-20 13:23:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 13:23:45.127: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5440,SelfLink:/api/v1/namespaces/watch-5440/configmaps/e2e-watch-test-watch-closed,UID:cd691952-d304-4a6a-8cb3-1de64c6cf434,ResourceVersion:21181960,Generation:0,CreationTimestamp:2020-01-20 13:23:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:23:45.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5440" for this suite.
Jan 20 13:23:51.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:23:51.268: INFO: namespace watch-5440 deletion completed in 6.133668379s
• [SLOW TEST:6.318 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:23:51.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:23:51.386: INFO: Create a RollingUpdate DaemonSet
Jan 20 13:23:51.390: INFO: Check that daemon pods launch on every node of the cluster
Jan 20 13:23:51.420: INFO: Number of nodes with available pods: 0
Jan 20 13:23:51.420: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:53.093: INFO: Number of nodes with available pods: 0
Jan 20 13:23:53.093: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:53.438: INFO: Number of nodes with available pods: 0
Jan 20 13:23:53.438: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:54.782: INFO: Number of nodes with available pods: 0
Jan 20 13:23:54.782: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:55.442: INFO: Number of nodes with available pods: 0
Jan 20 13:23:55.442: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:56.448: INFO: Number of nodes with available pods: 0
Jan 20 13:23:56.448: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:58.031: INFO: Number of nodes with available pods: 0
Jan 20 13:23:58.031: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:58.618: INFO: Number of nodes with available pods: 0
Jan 20 13:23:58.618: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:23:59.642: INFO: Number of nodes with available pods: 0
Jan 20 13:23:59.642: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:24:00.441: INFO: Number of nodes with available pods: 0
Jan 20 13:24:00.441: INFO: Node iruya-node is running more than one daemon pod
Jan 20 13:24:01.437: INFO: Number of nodes with available pods: 2
Jan 20 13:24:01.437: INFO: Number of running nodes: 2, number of available pods: 2
Jan 20 13:24:01.437: INFO: Update the DaemonSet to trigger a rollout
Jan 20 13:24:01.451: INFO: Updating DaemonSet daemon-set
Jan 20 13:24:17.483: INFO: Roll back the DaemonSet before rollout is complete
Jan 20 13:24:17.496: INFO: Updating DaemonSet daemon-set
Jan 20 13:24:17.496: INFO: Make sure DaemonSet rollback is complete
Jan 20 13:24:17.506: INFO: Wrong image for pod: daemon-set-zwj5z. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 20 13:24:17.506: INFO: Pod daemon-set-zwj5z is not available
Jan 20 13:24:18.591: INFO: Wrong image for pod: daemon-set-zwj5z. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 20 13:24:18.591: INFO: Pod daemon-set-zwj5z is not available
Jan 20 13:24:19.591: INFO: Wrong image for pod: daemon-set-zwj5z. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 20 13:24:19.592: INFO: Pod daemon-set-zwj5z is not available
Jan 20 13:24:20.592: INFO: Wrong image for pod: daemon-set-zwj5z. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 20 13:24:20.592: INFO: Pod daemon-set-zwj5z is not available
Jan 20 13:24:21.590: INFO: Wrong image for pod: daemon-set-zwj5z. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 20 13:24:21.590: INFO: Pod daemon-set-zwj5z is not available
Jan 20 13:24:22.601: INFO: Pod daemon-set-nxlll is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7216, will wait for the garbage collector to delete the pods
Jan 20 13:24:22.697: INFO: Deleting DaemonSet.extensions daemon-set took: 9.21921ms
Jan 20 13:24:23.098: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.931934ms
Jan 20 13:24:38.006: INFO: Number of nodes with available pods: 0
Jan 20 13:24:38.006: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 13:24:38.011: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7216/daemonsets","resourceVersion":"21182106"},"items":null}
Jan 20 13:24:38.034: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7216/pods","resourceVersion":"21182106"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:24:38.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7216" for this suite.
Jan 20 13:24:44.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:24:44.182: INFO: namespace daemonsets-7216 deletion completed in 6.128918668s
• [SLOW TEST:52.915 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:24:44.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:24:44.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265" in namespace "projected-8635" to be "success or failure"
Jan 20 13:24:44.354: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265": Phase="Pending", Reason="", readiness=false. Elapsed: 22.906334ms
Jan 20 13:24:46.368: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037291701s
Jan 20 13:24:48.378: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046745691s
Jan 20 13:24:50.388: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057046203s
Jan 20 13:24:52.399: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068242877s
STEP: Saw pod success
Jan 20 13:24:52.399: INFO: Pod "downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265" satisfied condition "success or failure"
Jan 20 13:24:52.405: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265 container client-container:
STEP: delete the pod
Jan 20 13:24:52.504: INFO: Waiting for pod downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265 to disappear
Jan 20 13:24:52.515: INFO: Pod downwardapi-volume-98f58921-71bb-401d-a401-6f2c30484265 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:24:52.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8635" for this suite.
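The pod under test mounts a projected downwardAPI volume that exposes the container's cpu limit as a file; because the container sets no cpu limit, the kubelet substitutes the node's allocatable cpu, which is what the test asserts. A sketch of the relevant spec (file, volume, and image names here are illustrative assumptions; the test generates its own UID-suffixed pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the real pod name carries a generated UID
spec:
  containers:
  - name: client-container
    image: busybox                   # assumption; the e2e test uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so the value written to the file
    # falls back to the node's allocatable cpu
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The pod runs to completion and its log is checked, which is the "success or failure" condition polled above.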
Jan 20 13:24:58.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:24:58.702: INFO: namespace projected-8635 deletion completed in 6.18019447s
• [SLOW TEST:14.520 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:24:58.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:24:58.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 20 13:24:58.991: INFO: stderr: ""
Jan 20 13:24:58.991: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:24:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1298" for this suite.
Jan 20 13:25:05.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:25:05.123: INFO: namespace kubectl-1298 deletion completed in 6.124817473s
• [SLOW TEST:6.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:25:05.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:25:05.251: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 20 13:25:10.257: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 20 13:25:14.271: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 20 13:25:14.391: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8613,SelfLink:/apis/apps/v1/namespaces/deployment-8613/deployments/test-cleanup-deployment,UID:b3431088-0d1a-46e3-95d7-9eefe9953d9e,ResourceVersion:21182231,Generation:1,CreationTimestamp:2020-01-20 13:25:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jan 20 13:25:14.449: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8613,SelfLink:/apis/apps/v1/namespaces/deployment-8613/replicasets/test-cleanup-deployment-55bbcbc84c,UID:cc2571c2-317a-4cd8-9481-066f7213d988,ResourceVersion:21182238,Generation:1,CreationTimestamp:2020-01-20 13:25:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b3431088-0d1a-46e3-95d7-9eefe9953d9e 0xc0031fdb37 0xc0031fdb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 20 13:25:14.449: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 20 13:25:14.449: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8613,SelfLink:/apis/apps/v1/namespaces/deployment-8613/replicasets/test-cleanup-controller,UID:831e7157-1cb8-4804-9482-f0990ae16c31,ResourceVersion:21182232,Generation:1,CreationTimestamp:2020-01-20 13:25:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b3431088-0d1a-46e3-95d7-9eefe9953d9e 0xc0031fda67 0xc0031fda68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 20 13:25:14.590: INFO: Pod "test-cleanup-controller-lnv82" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-lnv82,GenerateName:test-cleanup-controller-,Namespace:deployment-8613,SelfLink:/api/v1/namespaces/deployment-8613/pods/test-cleanup-controller-lnv82,UID:b855759b-9e90-4390-aa84-46649ced7713,ResourceVersion:21182226,Generation:0,CreationTimestamp:2020-01-20 13:25:05 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 831e7157-1cb8-4804-9482-f0990ae16c31 0xc002c8a3ef 0xc002c8a400}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-25kt4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-25kt4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-25kt4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c8a470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c8a490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:12 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-20 13:25:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:25:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://424200bb7382c61b8f3684f6be31fab72d89e21dbf0fb04becbaad72c923ef59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 13:25:14.591: INFO: Pod "test-cleanup-deployment-55bbcbc84c-6rdln" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-6rdln,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8613,SelfLink:/api/v1/namespaces/deployment-8613/pods/test-cleanup-deployment-55bbcbc84c-6rdln,UID:651b8c9b-83e0-4aa9-a963-1f1f86cf1065,ResourceVersion:21182237,Generation:0,CreationTimestamp:2020-01-20 13:25:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c cc2571c2-317a-4cd8-9481-066f7213d988 0xc002c8a577 0xc002c8a578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-25kt4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-25kt4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-25kt4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c8a5f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c8a610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:25:14.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8613" for this suite. 
Jan 20 13:25:20.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:25:20.833: INFO: namespace deployment-8613 deletion completed in 6.226809363s • [SLOW TEST:15.709 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:25:20.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 20 13:25:21.006: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 20 13:25:21.267: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 20 13:25:26.278: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 20 13:25:30.310: INFO: Creating deployment "test-rolling-update-deployment" Jan 20 13:25:30.319: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 20 
13:25:30.349: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 20 13:25:32.360: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 20 13:25:32.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 13:25:34.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 13:25:36.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715123530, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 13:25:38.377: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 20 13:25:38.395: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7563,SelfLink:/apis/apps/v1/namespaces/deployment-7563/deployments/test-rolling-update-deployment,UID:ef2e0cc3-9088-411d-865e-ab8c8fa55b5d,ResourceVersion:21182342,Generation:1,CreationTimestamp:2020-01-20 13:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-20 13:25:30 +0000 UTC 2020-01-20 13:25:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-20 13:25:37 +0000 UTC 2020-01-20 13:25:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 20 13:25:38.404: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7563,SelfLink:/apis/apps/v1/namespaces/deployment-7563/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d6f55513-1b8f-45c6-98c3-00cee22ef190,ResourceVersion:21182331,Generation:1,CreationTimestamp:2020-01-20 13:25:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ef2e0cc3-9088-411d-865e-ab8c8fa55b5d 0xc001f642a7 0xc001f642a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 20 13:25:38.404: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 20 13:25:38.405: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7563,SelfLink:/apis/apps/v1/namespaces/deployment-7563/replicasets/test-rolling-update-controller,UID:49d44e0e-3c74-4bf7-9089-b095fa67f52e,ResourceVersion:21182340,Generation:2,CreationTimestamp:2020-01-20 13:25:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ef2e0cc3-9088-411d-865e-ab8c8fa55b5d 0xc001f64167 0xc001f64168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 13:25:38.414: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-t9j8v" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-t9j8v,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7563,SelfLink:/api/v1/namespaces/deployment-7563/pods/test-rolling-update-deployment-79f6b9d75c-t9j8v,UID:cd01515a-b897-4bcd-b27f-9e482546bad5,ResourceVersion:21182330,Generation:0,CreationTimestamp:2020-01-20 13:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d6f55513-1b8f-45c6-98c3-00cee22ef190 0xc0023f08a7 0xc0023f08a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5qwwg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qwwg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5qwwg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023f0920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023f0940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:25:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-20 13:25:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-20 13:25:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b1f197d8ed5d99fa8ebbf6f726049049d826c5e8c91f8a53e9ba3b1a84923008}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:25:38.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-7563" for this suite. Jan 20 13:25:46.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:25:46.704: INFO: namespace deployment-7563 deletion completed in 8.153606626s • [SLOW TEST:25.871 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:25:46.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9550 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9550 
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9550 Jan 20 13:25:46.933: INFO: Found 0 stateful pods, waiting for 1 Jan 20 13:25:56.943: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 20 13:25:56.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 20 13:25:59.432: INFO: stderr: "I0120 13:25:58.935643 827 log.go:172] (0xc00013adc0) (0xc00077a640) Create stream\nI0120 13:25:58.935783 827 log.go:172] (0xc00013adc0) (0xc00077a640) Stream added, broadcasting: 1\nI0120 13:25:58.958366 827 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0120 13:25:58.958813 827 log.go:172] (0xc00013adc0) (0xc000674320) Create stream\nI0120 13:25:58.958898 827 log.go:172] (0xc00013adc0) (0xc000674320) Stream added, broadcasting: 3\nI0120 13:25:58.965008 827 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0120 13:25:58.965098 827 log.go:172] (0xc00013adc0) (0xc0004e6000) Create stream\nI0120 13:25:58.965116 827 log.go:172] (0xc00013adc0) (0xc0004e6000) Stream added, broadcasting: 5\nI0120 13:25:58.968237 827 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0120 13:25:59.262323 827 log.go:172] (0xc00013adc0) Data frame received for 5\nI0120 13:25:59.262453 827 log.go:172] (0xc0004e6000) (5) Data frame handling\nI0120 13:25:59.262503 827 log.go:172] (0xc0004e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 13:25:59.316390 827 log.go:172] (0xc00013adc0) Data frame received for 3\nI0120 13:25:59.316435 827 log.go:172] (0xc000674320) (3) Data frame handling\nI0120 13:25:59.316461 827 log.go:172] (0xc000674320) (3) Data frame sent\nI0120 13:25:59.420744 827 log.go:172] (0xc00013adc0) Data frame received for 1\nI0120 13:25:59.420882 827 
log.go:172] (0xc00077a640) (1) Data frame handling\nI0120 13:25:59.420941 827 log.go:172] (0xc00077a640) (1) Data frame sent\nI0120 13:25:59.420976 827 log.go:172] (0xc00013adc0) (0xc00077a640) Stream removed, broadcasting: 1\nI0120 13:25:59.421260 827 log.go:172] (0xc00013adc0) (0xc000674320) Stream removed, broadcasting: 3\nI0120 13:25:59.421909 827 log.go:172] (0xc00013adc0) (0xc0004e6000) Stream removed, broadcasting: 5\nI0120 13:25:59.422046 827 log.go:172] (0xc00013adc0) (0xc00077a640) Stream removed, broadcasting: 1\nI0120 13:25:59.422070 827 log.go:172] (0xc00013adc0) (0xc000674320) Stream removed, broadcasting: 3\nI0120 13:25:59.422083 827 log.go:172] (0xc00013adc0) (0xc0004e6000) Stream removed, broadcasting: 5\nI0120 13:25:59.422613 827 log.go:172] (0xc00013adc0) Go away received\n" Jan 20 13:25:59.433: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 20 13:25:59.433: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 20 13:25:59.441: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 20 13:26:09.456: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 13:26:09.456: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 13:26:09.488: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998357s Jan 20 13:26:10.507: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985723194s Jan 20 13:26:11.516: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966631853s Jan 20 13:26:12.538: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957376857s Jan 20 13:26:13.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.935517954s Jan 20 13:26:14.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.92530426s Jan 20 13:26:15.577: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 3.908732086s Jan 20 13:26:16.590: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.897178457s Jan 20 13:26:17.604: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.883425553s Jan 20 13:26:18.622: INFO: Verifying statefulset ss doesn't scale past 1 for another 869.424619ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9550 Jan 20 13:26:19.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 20 13:26:20.403: INFO: stderr: "I0120 13:26:20.009503 855 log.go:172] (0xc000994000) (0xc0008b00a0) Create stream\nI0120 13:26:20.009939 855 log.go:172] (0xc000994000) (0xc0008b00a0) Stream added, broadcasting: 1\nI0120 13:26:20.018371 855 log.go:172] (0xc000994000) Reply frame received for 1\nI0120 13:26:20.018461 855 log.go:172] (0xc000994000) (0xc0008b0140) Create stream\nI0120 13:26:20.018475 855 log.go:172] (0xc000994000) (0xc0008b0140) Stream added, broadcasting: 3\nI0120 13:26:20.019859 855 log.go:172] (0xc000994000) Reply frame received for 3\nI0120 13:26:20.019882 855 log.go:172] (0xc000994000) (0xc0008b01e0) Create stream\nI0120 13:26:20.019888 855 log.go:172] (0xc000994000) (0xc0008b01e0) Stream added, broadcasting: 5\nI0120 13:26:20.020950 855 log.go:172] (0xc000994000) Reply frame received for 5\nI0120 13:26:20.122848 855 log.go:172] (0xc000994000) Data frame received for 5\nI0120 13:26:20.123122 855 log.go:172] (0xc0008b01e0) (5) Data frame handling\nI0120 13:26:20.123159 855 log.go:172] (0xc0008b01e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 13:26:20.123270 855 log.go:172] (0xc000994000) Data frame received for 3\nI0120 13:26:20.123311 855 log.go:172] (0xc0008b0140) (3) Data frame handling\nI0120 13:26:20.123342 855 log.go:172] (0xc0008b0140) (3) Data frame 
sent\nI0120 13:26:20.389039 855 log.go:172] (0xc000994000) (0xc0008b0140) Stream removed, broadcasting: 3\nI0120 13:26:20.389414 855 log.go:172] (0xc000994000) Data frame received for 1\nI0120 13:26:20.389480 855 log.go:172] (0xc0008b00a0) (1) Data frame handling\nI0120 13:26:20.389535 855 log.go:172] (0xc0008b00a0) (1) Data frame sent\nI0120 13:26:20.389563 855 log.go:172] (0xc000994000) (0xc0008b00a0) Stream removed, broadcasting: 1\nI0120 13:26:20.390063 855 log.go:172] (0xc000994000) (0xc0008b01e0) Stream removed, broadcasting: 5\nI0120 13:26:20.390686 855 log.go:172] (0xc000994000) Go away received\nI0120 13:26:20.392166 855 log.go:172] (0xc000994000) (0xc0008b00a0) Stream removed, broadcasting: 1\nI0120 13:26:20.392203 855 log.go:172] (0xc000994000) (0xc0008b0140) Stream removed, broadcasting: 3\nI0120 13:26:20.392218 855 log.go:172] (0xc000994000) (0xc0008b01e0) Stream removed, broadcasting: 5\n" Jan 20 13:26:20.403: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 20 13:26:20.403: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 20 13:26:20.410: INFO: Found 1 stateful pods, waiting for 3 Jan 20 13:26:30.425: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 13:26:30.425: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 13:26:30.425: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 20 13:26:40.424: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 13:26:40.424: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 13:26:40.424: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy 
stateful pod Jan 20 13:26:40.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 20 13:26:41.101: INFO: stderr: "I0120 13:26:40.687123 873 log.go:172] (0xc000116790) (0xc0004fe6e0) Create stream\nI0120 13:26:40.687449 873 log.go:172] (0xc000116790) (0xc0004fe6e0) Stream added, broadcasting: 1\nI0120 13:26:40.694820 873 log.go:172] (0xc000116790) Reply frame received for 1\nI0120 13:26:40.694871 873 log.go:172] (0xc000116790) (0xc00021e000) Create stream\nI0120 13:26:40.694878 873 log.go:172] (0xc000116790) (0xc00021e000) Stream added, broadcasting: 3\nI0120 13:26:40.698039 873 log.go:172] (0xc000116790) Reply frame received for 3\nI0120 13:26:40.698070 873 log.go:172] (0xc000116790) (0xc0007f6000) Create stream\nI0120 13:26:40.698078 873 log.go:172] (0xc000116790) (0xc0007f6000) Stream added, broadcasting: 5\nI0120 13:26:40.699770 873 log.go:172] (0xc000116790) Reply frame received for 5\nI0120 13:26:40.913555 873 log.go:172] (0xc000116790) Data frame received for 3\nI0120 13:26:40.913676 873 log.go:172] (0xc00021e000) (3) Data frame handling\nI0120 13:26:40.913694 873 log.go:172] (0xc00021e000) (3) Data frame sent\nI0120 13:26:40.914518 873 log.go:172] (0xc000116790) Data frame received for 5\nI0120 13:26:40.914531 873 log.go:172] (0xc0007f6000) (5) Data frame handling\nI0120 13:26:40.914543 873 log.go:172] (0xc0007f6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 13:26:41.091229 873 log.go:172] (0xc000116790) (0xc00021e000) Stream removed, broadcasting: 3\nI0120 13:26:41.091399 873 log.go:172] (0xc000116790) Data frame received for 1\nI0120 13:26:41.091427 873 log.go:172] (0xc0004fe6e0) (1) Data frame handling\nI0120 13:26:41.091448 873 log.go:172] (0xc0004fe6e0) (1) Data frame sent\nI0120 13:26:41.091558 873 log.go:172] (0xc000116790) (0xc0007f6000) Stream removed, broadcasting: 5\nI0120 
13:26:41.091621 873 log.go:172] (0xc000116790) (0xc0004fe6e0) Stream removed, broadcasting: 1\nI0120 13:26:41.091644 873 log.go:172] (0xc000116790) Go away received\nI0120 13:26:41.092943 873 log.go:172] (0xc000116790) (0xc0004fe6e0) Stream removed, broadcasting: 1\nI0120 13:26:41.092965 873 log.go:172] (0xc000116790) (0xc00021e000) Stream removed, broadcasting: 3\nI0120 13:26:41.092973 873 log.go:172] (0xc000116790) (0xc0007f6000) Stream removed, broadcasting: 5\n" Jan 20 13:26:41.101: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 20 13:26:41.101: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 20 13:26:41.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 20 13:26:41.638: INFO: stderr: "I0120 13:26:41.258488 887 log.go:172] (0xc000116f20) (0xc0005b8b40) Create stream\nI0120 13:26:41.258831 887 log.go:172] (0xc000116f20) (0xc0005b8b40) Stream added, broadcasting: 1\nI0120 13:26:41.263411 887 log.go:172] (0xc000116f20) Reply frame received for 1\nI0120 13:26:41.263456 887 log.go:172] (0xc000116f20) (0xc00080c000) Create stream\nI0120 13:26:41.263471 887 log.go:172] (0xc000116f20) (0xc00080c000) Stream added, broadcasting: 3\nI0120 13:26:41.264974 887 log.go:172] (0xc000116f20) Reply frame received for 3\nI0120 13:26:41.265035 887 log.go:172] (0xc000116f20) (0xc000892000) Create stream\nI0120 13:26:41.265053 887 log.go:172] (0xc000116f20) (0xc000892000) Stream added, broadcasting: 5\nI0120 13:26:41.266185 887 log.go:172] (0xc000116f20) Reply frame received for 5\nI0120 13:26:41.434188 887 log.go:172] (0xc000116f20) Data frame received for 5\nI0120 13:26:41.434252 887 log.go:172] (0xc000892000) (5) Data frame handling\nI0120 13:26:41.434269 887 log.go:172] (0xc000892000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0120 13:26:41.536504 887 log.go:172] (0xc000116f20) Data frame received for 3\nI0120 13:26:41.536565 887 log.go:172] (0xc00080c000) (3) Data frame handling\nI0120 13:26:41.536591 887 log.go:172] (0xc00080c000) (3) Data frame sent\nI0120 13:26:41.630653 887 log.go:172] (0xc000116f20) Data frame received for 1\nI0120 13:26:41.630745 887 log.go:172] (0xc0005b8b40) (1) Data frame handling\nI0120 13:26:41.630766 887 log.go:172] (0xc0005b8b40) (1) Data frame sent\nI0120 13:26:41.630783 887 log.go:172] (0xc000116f20) (0xc00080c000) Stream removed, broadcasting: 3\nI0120 13:26:41.630815 887 log.go:172] (0xc000116f20) (0xc000892000) Stream removed, broadcasting: 5\nI0120 13:26:41.630896 887 log.go:172] (0xc000116f20) (0xc0005b8b40) Stream removed, broadcasting: 1\nI0120 13:26:41.630947 887 log.go:172] (0xc000116f20) Go away received\nI0120 13:26:41.632222 887 log.go:172] (0xc000116f20) (0xc0005b8b40) Stream removed, broadcasting: 1\nI0120 13:26:41.632396 887 log.go:172] (0xc000116f20) (0xc00080c000) Stream removed, broadcasting: 3\nI0120 13:26:41.632415 887 log.go:172] (0xc000116f20) (0xc000892000) Stream removed, broadcasting: 5\n" Jan 20 13:26:41.638: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 20 13:26:41.638: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 20 13:26:41.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 20 13:26:42.373: INFO: stderr: "I0120 13:26:41.985507 905 log.go:172] (0xc0009b80b0) (0xc00097e0a0) Create stream\nI0120 13:26:41.986310 905 log.go:172] (0xc0009b80b0) (0xc00097e0a0) Stream added, broadcasting: 1\nI0120 13:26:41.995104 905 log.go:172] (0xc0009b80b0) Reply frame received for 1\nI0120 13:26:41.995145 905 log.go:172] (0xc0009b80b0) 
(0xc00064e1e0) Create stream\nI0120 13:26:41.995157 905 log.go:172] (0xc0009b80b0) (0xc00064e1e0) Stream added, broadcasting: 3\nI0120 13:26:42.002959 905 log.go:172] (0xc0009b80b0) Reply frame received for 3\nI0120 13:26:42.003265 905 log.go:172] (0xc0009b80b0) (0xc000268000) Create stream\nI0120 13:26:42.003337 905 log.go:172] (0xc0009b80b0) (0xc000268000) Stream added, broadcasting: 5\nI0120 13:26:42.008099 905 log.go:172] (0xc0009b80b0) Reply frame received for 5\nI0120 13:26:42.150078 905 log.go:172] (0xc0009b80b0) Data frame received for 5\nI0120 13:26:42.150313 905 log.go:172] (0xc000268000) (5) Data frame handling\nI0120 13:26:42.150374 905 log.go:172] (0xc000268000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 13:26:42.190347 905 log.go:172] (0xc0009b80b0) Data frame received for 3\nI0120 13:26:42.190592 905 log.go:172] (0xc00064e1e0) (3) Data frame handling\nI0120 13:26:42.190638 905 log.go:172] (0xc00064e1e0) (3) Data frame sent\nI0120 13:26:42.354472 905 log.go:172] (0xc0009b80b0) Data frame received for 1\nI0120 13:26:42.354635 905 log.go:172] (0xc00097e0a0) (1) Data frame handling\nI0120 13:26:42.354664 905 log.go:172] (0xc00097e0a0) (1) Data frame sent\nI0120 13:26:42.354716 905 log.go:172] (0xc0009b80b0) (0xc00097e0a0) Stream removed, broadcasting: 1\nI0120 13:26:42.356798 905 log.go:172] (0xc0009b80b0) (0xc00064e1e0) Stream removed, broadcasting: 3\nI0120 13:26:42.357208 905 log.go:172] (0xc0009b80b0) (0xc000268000) Stream removed, broadcasting: 5\nI0120 13:26:42.357666 905 log.go:172] (0xc0009b80b0) (0xc00097e0a0) Stream removed, broadcasting: 1\nI0120 13:26:42.357724 905 log.go:172] (0xc0009b80b0) (0xc00064e1e0) Stream removed, broadcasting: 3\nI0120 13:26:42.357845 905 log.go:172] (0xc0009b80b0) (0xc000268000) Stream removed, broadcasting: 5\n" Jan 20 13:26:42.373: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 20 13:26:42.373: INFO: stdout of mv -v /usr/share/nginx/html/index.html 
/tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 20 13:26:42.373: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 13:26:42.381: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 20 13:26:53.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 13:26:53.289: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 20 13:26:53.289: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 20 13:26:53.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999592s Jan 20 13:26:54.367: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972391262s Jan 20 13:26:55.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.949706061s Jan 20 13:26:56.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.940266802s Jan 20 13:26:57.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.93235195s Jan 20 13:26:58.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.908339382s Jan 20 13:26:59.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.900083465s Jan 20 13:27:00.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.890787325s Jan 20 13:27:01.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.881114833s Jan 20 13:27:02.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 870.052233ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9550 Jan 20 13:27:03.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 20 13:27:04.182: INFO: stderr: "I0120 13:27:03.736705 925 log.go:172] (0xc000a60420) (0xc00040a820) 
Create stream\nI0120 13:27:03.736926 925 log.go:172] (0xc000a60420) (0xc00040a820) Stream added, broadcasting: 1\nI0120 13:27:03.749794 925 log.go:172] (0xc000a60420) Reply frame received for 1\nI0120 13:27:03.749934 925 log.go:172] (0xc000a60420) (0xc0006601e0) Create stream\nI0120 13:27:03.749949 925 log.go:172] (0xc000a60420) (0xc0006601e0) Stream added, broadcasting: 3\nI0120 13:27:03.752097 925 log.go:172] (0xc000a60420) Reply frame received for 3\nI0120 13:27:03.752120 925 log.go:172] (0xc000a60420) (0xc00040a000) Create stream\nI0120 13:27:03.752127 925 log.go:172] (0xc000a60420) (0xc00040a000) Stream added, broadcasting: 5\nI0120 13:27:03.754785 925 log.go:172] (0xc000a60420) Reply frame received for 5\nI0120 13:27:03.970129 925 log.go:172] (0xc000a60420) Data frame received for 3\nI0120 13:27:03.970264 925 log.go:172] (0xc0006601e0) (3) Data frame handling\nI0120 13:27:03.970291 925 log.go:172] (0xc0006601e0) (3) Data frame sent\nI0120 13:27:03.970361 925 log.go:172] (0xc000a60420) Data frame received for 5\nI0120 13:27:03.970389 925 log.go:172] (0xc00040a000) (5) Data frame handling\nI0120 13:27:03.970407 925 log.go:172] (0xc00040a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 13:27:04.165592 925 log.go:172] (0xc000a60420) Data frame received for 1\nI0120 13:27:04.165810 925 log.go:172] (0xc000a60420) (0xc0006601e0) Stream removed, broadcasting: 3\nI0120 13:27:04.165905 925 log.go:172] (0xc00040a820) (1) Data frame handling\nI0120 13:27:04.165941 925 log.go:172] (0xc00040a820) (1) Data frame sent\nI0120 13:27:04.165989 925 log.go:172] (0xc000a60420) (0xc00040a000) Stream removed, broadcasting: 5\nI0120 13:27:04.166117 925 log.go:172] (0xc000a60420) (0xc00040a820) Stream removed, broadcasting: 1\nI0120 13:27:04.166147 925 log.go:172] (0xc000a60420) Go away received\nI0120 13:27:04.167522 925 log.go:172] (0xc000a60420) (0xc00040a820) Stream removed, broadcasting: 1\nI0120 13:27:04.167540 925 log.go:172] (0xc000a60420) 
(0xc0006601e0) Stream removed, broadcasting: 3\nI0120 13:27:04.167550 925 log.go:172] (0xc000a60420) (0xc00040a000) Stream removed, broadcasting: 5\n" Jan 20 13:27:04.182: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 20 13:27:04.182: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 20 13:27:04.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 20 13:27:04.467: INFO: stderr: "I0120 13:27:04.326154 945 log.go:172] (0xc000116dc0) (0xc00021e820) Create stream\nI0120 13:27:04.326375 945 log.go:172] (0xc000116dc0) (0xc00021e820) Stream added, broadcasting: 1\nI0120 13:27:04.328688 945 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0120 13:27:04.328721 945 log.go:172] (0xc000116dc0) (0xc00061a000) Create stream\nI0120 13:27:04.328730 945 log.go:172] (0xc000116dc0) (0xc00061a000) Stream added, broadcasting: 3\nI0120 13:27:04.329729 945 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0120 13:27:04.329746 945 log.go:172] (0xc000116dc0) (0xc00061a0a0) Create stream\nI0120 13:27:04.329754 945 log.go:172] (0xc000116dc0) (0xc00061a0a0) Stream added, broadcasting: 5\nI0120 13:27:04.330932 945 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0120 13:27:04.387938 945 log.go:172] (0xc000116dc0) Data frame received for 3\nI0120 13:27:04.388072 945 log.go:172] (0xc00061a000) (3) Data frame handling\nI0120 13:27:04.388088 945 log.go:172] (0xc00061a000) (3) Data frame sent\nI0120 13:27:04.388336 945 log.go:172] (0xc000116dc0) Data frame received for 5\nI0120 13:27:04.388350 945 log.go:172] (0xc00061a0a0) (5) Data frame handling\nI0120 13:27:04.388363 945 log.go:172] (0xc00061a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 13:27:04.459455 945 log.go:172] (0xc000116dc0) 
(0xc00061a000) Stream removed, broadcasting: 3\nI0120 13:27:04.459615 945 log.go:172] (0xc000116dc0) Data frame received for 1\nI0120 13:27:04.459626 945 log.go:172] (0xc00021e820) (1) Data frame handling\nI0120 13:27:04.459640 945 log.go:172] (0xc00021e820) (1) Data frame sent\nI0120 13:27:04.459646 945 log.go:172] (0xc000116dc0) (0xc00021e820) Stream removed, broadcasting: 1\nI0120 13:27:04.460008 945 log.go:172] (0xc000116dc0) (0xc00061a0a0) Stream removed, broadcasting: 5\nI0120 13:27:04.460077 945 log.go:172] (0xc000116dc0) Go away received\nI0120 13:27:04.460464 945 log.go:172] (0xc000116dc0) (0xc00021e820) Stream removed, broadcasting: 1\nI0120 13:27:04.460476 945 log.go:172] (0xc000116dc0) (0xc00061a000) Stream removed, broadcasting: 3\nI0120 13:27:04.460480 945 log.go:172] (0xc000116dc0) (0xc00061a0a0) Stream removed, broadcasting: 5\n" Jan 20 13:27:04.468: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 20 13:27:04.468: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 20 13:27:04.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9550 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 20 13:27:05.102: INFO: stderr: "I0120 13:27:04.631222 960 log.go:172] (0xc0009fe420) (0xc00080c640) Create stream\nI0120 13:27:04.631354 960 log.go:172] (0xc0009fe420) (0xc00080c640) Stream added, broadcasting: 1\nI0120 13:27:04.645130 960 log.go:172] (0xc0009fe420) Reply frame received for 1\nI0120 13:27:04.645183 960 log.go:172] (0xc0009fe420) (0xc000a0a000) Create stream\nI0120 13:27:04.645198 960 log.go:172] (0xc0009fe420) (0xc000a0a000) Stream added, broadcasting: 3\nI0120 13:27:04.648366 960 log.go:172] (0xc0009fe420) Reply frame received for 3\nI0120 13:27:04.648392 960 log.go:172] (0xc0009fe420) (0xc0005781e0) Create stream\nI0120 13:27:04.648421 960 log.go:172] 
(0xc0009fe420) (0xc0005781e0) Stream added, broadcasting: 5\nI0120 13:27:04.649577 960 log.go:172] (0xc0009fe420) Reply frame received for 5\nI0120 13:27:04.850574 960 log.go:172] (0xc0009fe420) Data frame received for 3\nI0120 13:27:04.850893 960 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0120 13:27:04.850969 960 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0120 13:27:04.850994 960 log.go:172] (0xc0009fe420) Data frame received for 5\nI0120 13:27:04.851030 960 log.go:172] (0xc0005781e0) (5) Data frame handling\nI0120 13:27:04.851068 960 log.go:172] (0xc0005781e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 13:27:05.091691 960 log.go:172] (0xc0009fe420) (0xc000a0a000) Stream removed, broadcasting: 3\nI0120 13:27:05.091973 960 log.go:172] (0xc0009fe420) Data frame received for 1\nI0120 13:27:05.092011 960 log.go:172] (0xc00080c640) (1) Data frame handling\nI0120 13:27:05.092048 960 log.go:172] (0xc00080c640) (1) Data frame sent\nI0120 13:27:05.092060 960 log.go:172] (0xc0009fe420) (0xc0005781e0) Stream removed, broadcasting: 5\nI0120 13:27:05.092183 960 log.go:172] (0xc0009fe420) (0xc00080c640) Stream removed, broadcasting: 1\nI0120 13:27:05.092214 960 log.go:172] (0xc0009fe420) Go away received\nI0120 13:27:05.094049 960 log.go:172] (0xc0009fe420) (0xc00080c640) Stream removed, broadcasting: 1\nI0120 13:27:05.094075 960 log.go:172] (0xc0009fe420) (0xc000a0a000) Stream removed, broadcasting: 3\nI0120 13:27:05.094087 960 log.go:172] (0xc0009fe420) (0xc0005781e0) Stream removed, broadcasting: 5\n" Jan 20 13:27:05.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 20 13:27:05.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 20 13:27:05.102: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 20 13:27:35.147: INFO: Deleting all statefulset in ns statefulset-9550 Jan 20 13:27:35.152: INFO: Scaling statefulset ss to 0 Jan 20 13:27:35.171: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 13:27:35.175: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:27:35.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9550" for this suite. Jan 20 13:27:41.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:27:41.462: INFO: namespace statefulset-9550 deletion completed in 6.168408256s • [SLOW TEST:114.757 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:27:41.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should 
fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-7a6be3aa-28ac-45ed-91a5-bde07b9c513c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:27:41.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5647" for this suite. Jan 20 13:27:47.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:27:47.723: INFO: namespace secrets-5647 deletion completed in 6.156715371s • [SLOW TEST:6.261 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:27:47.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace 
pod-network-test-8176 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 13:27:47.862: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 13:28:28.189: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8176 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 13:28:28.189: INFO: >>> kubeConfig: /root/.kube/config I0120 13:28:28.265089 8 log.go:172] (0xc00107e790) (0xc00242d9a0) Create stream I0120 13:28:28.265261 8 log.go:172] (0xc00107e790) (0xc00242d9a0) Stream added, broadcasting: 1 I0120 13:28:28.277835 8 log.go:172] (0xc00107e790) Reply frame received for 1 I0120 13:28:28.277930 8 log.go:172] (0xc00107e790) (0xc001c240a0) Create stream I0120 13:28:28.277967 8 log.go:172] (0xc00107e790) (0xc001c240a0) Stream added, broadcasting: 3 I0120 13:28:28.283459 8 log.go:172] (0xc00107e790) Reply frame received for 3 I0120 13:28:28.283513 8 log.go:172] (0xc00107e790) (0xc00242da40) Create stream I0120 13:28:28.283527 8 log.go:172] (0xc00107e790) (0xc00242da40) Stream added, broadcasting: 5 I0120 13:28:28.286397 8 log.go:172] (0xc00107e790) Reply frame received for 5 I0120 13:28:28.487132 8 log.go:172] (0xc00107e790) Data frame received for 3 I0120 13:28:28.487235 8 log.go:172] (0xc001c240a0) (3) Data frame handling I0120 13:28:28.487283 8 log.go:172] (0xc001c240a0) (3) Data frame sent I0120 13:28:28.720970 8 log.go:172] (0xc00107e790) Data frame received for 1 I0120 13:28:28.721167 8 log.go:172] (0xc00107e790) (0xc001c240a0) Stream removed, broadcasting: 3 I0120 13:28:28.721296 8 log.go:172] (0xc00242d9a0) (1) Data frame handling I0120 13:28:28.721325 8 log.go:172] (0xc00242d9a0) (1) Data frame sent I0120 13:28:28.721389 8 log.go:172] (0xc00107e790) (0xc00242da40) Stream removed, broadcasting: 5 I0120 
13:28:28.721493 8 log.go:172] (0xc00107e790) (0xc00242d9a0) Stream removed, broadcasting: 1 I0120 13:28:28.721548 8 log.go:172] (0xc00107e790) Go away received I0120 13:28:28.722432 8 log.go:172] (0xc00107e790) (0xc00242d9a0) Stream removed, broadcasting: 1 I0120 13:28:28.722503 8 log.go:172] (0xc00107e790) (0xc001c240a0) Stream removed, broadcasting: 3 I0120 13:28:28.722532 8 log.go:172] (0xc00107e790) (0xc00242da40) Stream removed, broadcasting: 5 Jan 20 13:28:28.722: INFO: Found all expected endpoints: [netserver-0] Jan 20 13:28:28.733: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8176 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 13:28:28.733: INFO: >>> kubeConfig: /root/.kube/config I0120 13:28:28.832260 8 log.go:172] (0xc000ec26e0) (0xc001c24280) Create stream I0120 13:28:28.832444 8 log.go:172] (0xc000ec26e0) (0xc001c24280) Stream added, broadcasting: 1 I0120 13:28:28.843793 8 log.go:172] (0xc000ec26e0) Reply frame received for 1 I0120 13:28:28.843904 8 log.go:172] (0xc000ec26e0) (0xc00242dae0) Create stream I0120 13:28:28.843924 8 log.go:172] (0xc000ec26e0) (0xc00242dae0) Stream added, broadcasting: 3 I0120 13:28:28.848695 8 log.go:172] (0xc000ec26e0) Reply frame received for 3 I0120 13:28:28.848956 8 log.go:172] (0xc000ec26e0) (0xc00318c320) Create stream I0120 13:28:28.848981 8 log.go:172] (0xc000ec26e0) (0xc00318c320) Stream added, broadcasting: 5 I0120 13:28:28.854669 8 log.go:172] (0xc000ec26e0) Reply frame received for 5 I0120 13:28:29.043819 8 log.go:172] (0xc000ec26e0) Data frame received for 3 I0120 13:28:29.043916 8 log.go:172] (0xc00242dae0) (3) Data frame handling I0120 13:28:29.043945 8 log.go:172] (0xc00242dae0) (3) Data frame sent I0120 13:28:29.173857 8 log.go:172] (0xc000ec26e0) Data frame received for 1 I0120 13:28:29.174082 8 
log.go:172] (0xc000ec26e0) (0xc00242dae0) Stream removed, broadcasting: 3 I0120 13:28:29.174199 8 log.go:172] (0xc001c24280) (1) Data frame handling I0120 13:28:29.174235 8 log.go:172] (0xc001c24280) (1) Data frame sent I0120 13:28:29.174265 8 log.go:172] (0xc000ec26e0) (0xc00318c320) Stream removed, broadcasting: 5 I0120 13:28:29.174327 8 log.go:172] (0xc000ec26e0) (0xc001c24280) Stream removed, broadcasting: 1 I0120 13:28:29.174359 8 log.go:172] (0xc000ec26e0) Go away received I0120 13:28:29.174755 8 log.go:172] (0xc000ec26e0) (0xc001c24280) Stream removed, broadcasting: 1 I0120 13:28:29.174774 8 log.go:172] (0xc000ec26e0) (0xc00242dae0) Stream removed, broadcasting: 3 I0120 13:28:29.174832 8 log.go:172] (0xc000ec26e0) (0xc00318c320) Stream removed, broadcasting: 5 Jan 20 13:28:29.174: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 20 13:28:29.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8176" for this suite. 
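[Editor's note] The node-pod HTTP check above curls each netserver pod's /hostName endpoint from inside the hostexec container and strips blank lines from the response body before matching the pod name. The filtering step can be sketched locally (an illustration, not e2e output; printf stands in for the curl response, since no cluster is available here):

```shell
# Sketch of the probe's response handling (assumption: mirrors the
# `curl ... | grep -v '^\s*$'` pipeline shown in the log above).
# printf stands in for the HTTP response body from a netserver pod.
response="$(printf 'netserver-0\n\n')"
# Drop blank lines, as the probe does before comparing hostnames.
hostname="$(printf '%s\n' "$response" | grep -v '^[[:space:]]*$')"
echo "$hostname"
```

The real probe differs only in that the response comes from `curl http://<pod-ip>:8080/hostName` executed via `kubectl exec` in the host-test-container-pod.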
Jan 20 13:28:53.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 13:28:53.760: INFO: namespace pod-network-test-8176 deletion completed in 24.575741384s • [SLOW TEST:66.036 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 20 13:28:53.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 20 13:29:03.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-5de84fd4-ee77-4240-b0a3-217bf0d0148a -c busybox-main-container --namespace=emptydir-8533 -- cat /usr/share/volumeshare/shareddata.txt' Jan 20 13:29:04.485: INFO: stderr: "I0120 13:29:04.169086 980 log.go:172] (0xc0007e22c0) (0xc0005f2b40) Create stream\nI0120 13:29:04.169422 980 
log.go:172] (0xc0007e22c0) (0xc0005f2b40) Stream added, broadcasting: 1\nI0120 13:29:04.179731 980 log.go:172] (0xc0007e22c0) Reply frame received for 1\nI0120 13:29:04.179931 980 log.go:172] (0xc0007e22c0) (0xc0003d0000) Create stream\nI0120 13:29:04.179961 980 log.go:172] (0xc0007e22c0) (0xc0003d0000) Stream added, broadcasting: 3\nI0120 13:29:04.181915 980 log.go:172] (0xc0007e22c0) Reply frame received for 3\nI0120 13:29:04.181943 980 log.go:172] (0xc0007e22c0) (0xc0003e8000) Create stream\nI0120 13:29:04.181952 980 log.go:172] (0xc0007e22c0) (0xc0003e8000) Stream added, broadcasting: 5\nI0120 13:29:04.183115 980 log.go:172] (0xc0007e22c0) Reply frame received for 5\nI0120 13:29:04.325908 980 log.go:172] (0xc0007e22c0) Data frame received for 3\nI0120 13:29:04.326018 980 log.go:172] (0xc0003d0000) (3) Data frame handling\nI0120 13:29:04.326037 980 log.go:172] (0xc0003d0000) (3) Data frame sent\nI0120 13:29:04.474858 980 log.go:172] (0xc0007e22c0) (0xc0003d0000) Stream removed, broadcasting: 3\nI0120 13:29:04.475042 980 log.go:172] (0xc0007e22c0) Data frame received for 1\nI0120 13:29:04.475061 980 log.go:172] (0xc0005f2b40) (1) Data frame handling\nI0120 13:29:04.475078 980 log.go:172] (0xc0005f2b40) (1) Data frame sent\nI0120 13:29:04.475126 980 log.go:172] (0xc0007e22c0) (0xc0005f2b40) Stream removed, broadcasting: 1\nI0120 13:29:04.475285 980 log.go:172] (0xc0007e22c0) (0xc0003e8000) Stream removed, broadcasting: 5\nI0120 13:29:04.475429 980 log.go:172] (0xc0007e22c0) Go away received\nI0120 13:29:04.476338 980 log.go:172] (0xc0007e22c0) (0xc0005f2b40) Stream removed, broadcasting: 1\nI0120 13:29:04.476349 980 log.go:172] (0xc0007e22c0) (0xc0003d0000) Stream removed, broadcasting: 3\nI0120 13:29:04.476355 980 log.go:172] (0xc0007e22c0) (0xc0003e8000) Stream removed, broadcasting: 5\n" Jan 20 13:29:04.485: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:29:04.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8533" for this suite.
Jan 20 13:29:10.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:29:10.687: INFO: namespace emptydir-8533 deletion completed in 6.192423551s

• [SLOW TEST:16.928 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:29:10.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:29:10.847: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:29:11.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3602" for this suite.
Jan 20 13:29:18.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:29:18.140: INFO: namespace custom-resource-definition-3602 deletion completed in 6.157309217s

• [SLOW TEST:7.452 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:29:18.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-13c8d089-9ac8-4f17-b4a2-8e9a7719c113
STEP: Creating a pod to test consume configMaps
Jan 20 13:29:18.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be" in namespace "configmap-4248" to be "success or failure"
Jan 20 13:29:18.340: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.8699ms
Jan 20 13:29:20.357: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020490919s
Jan 20 13:29:22.367: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030766548s
Jan 20 13:29:24.384: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047673121s
Jan 20 13:29:26.397: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060896249s
STEP: Saw pod success
Jan 20 13:29:26.397: INFO: Pod "pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be" satisfied condition "success or failure"
Jan 20 13:29:26.403: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be container configmap-volume-test: 
STEP: delete the pod
Jan 20 13:29:26.456: INFO: Waiting for pod pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be to disappear
Jan 20 13:29:26.461: INFO: Pod pod-configmaps-6e5c8aaa-8cb5-411d-9ab3-7878530200be no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:29:26.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4248" for this suite.
Jan 20 13:29:32.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:29:32.627: INFO: namespace configmap-4248 deletion completed in 6.160245187s

• [SLOW TEST:14.485 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:29:32.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-111566c9-ef39-4dad-bb67-14f07d9a3608
STEP: Creating a pod to test consume configMaps
Jan 20 13:29:32.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c" in namespace "configmap-2145" to be "success or failure"
Jan 20 13:29:32.778: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.411794ms
Jan 20 13:29:34.791: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04178658s
Jan 20 13:29:36.799: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049635743s
Jan 20 13:29:38.808: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058923219s
Jan 20 13:29:40.824: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074578716s
STEP: Saw pod success
Jan 20 13:29:40.824: INFO: Pod "pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c" satisfied condition "success or failure"
Jan 20 13:29:40.832: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c container configmap-volume-test: 
STEP: delete the pod
Jan 20 13:29:40.988: INFO: Waiting for pod pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c to disappear
Jan 20 13:29:41.000: INFO: Pod pod-configmaps-e882c713-9001-4f67-996a-56ba0ee1379c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:29:41.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2145" for this suite.
Jan 20 13:29:47.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:29:47.188: INFO: namespace configmap-2145 deletion completed in 6.180526866s

• [SLOW TEST:14.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:29:47.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-64e92b13-df2d-4f8e-baba-d15b848ddac0
STEP: Creating a pod to test consume secrets
Jan 20 13:29:47.349: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162" in namespace "projected-2496" to be "success or failure"
Jan 20 13:29:47.418: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162": Phase="Pending", Reason="", readiness=false. Elapsed: 68.955053ms
Jan 20 13:29:49.427: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077444251s
Jan 20 13:29:51.436: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086780007s
Jan 20 13:29:53.446: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097155165s
Jan 20 13:29:55.454: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104371924s
STEP: Saw pod success
Jan 20 13:29:55.454: INFO: Pod "pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162" satisfied condition "success or failure"
Jan 20 13:29:55.459: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 13:29:55.563: INFO: Waiting for pod pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162 to disappear
Jan 20 13:29:55.577: INFO: Pod pod-projected-secrets-a10d1dc7-a46f-4400-aafa-709b91c59162 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:29:55.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2496" for this suite.
Jan 20 13:30:01.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:30:01.744: INFO: namespace projected-2496 deletion completed in 6.158710546s

• [SLOW TEST:14.556 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:30:01.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-9b9bb100-4ac4-49a4-946a-096e2b945cee in namespace container-probe-8322
Jan 20 13:30:09.981: INFO: Started pod busybox-9b9bb100-4ac4-49a4-946a-096e2b945cee in namespace container-probe-8322
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 13:30:09.985: INFO: Initial restart count of pod busybox-9b9bb100-4ac4-49a4-946a-096e2b945cee is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:34:11.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8322" for this suite.
Jan 20 13:34:17.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:34:17.723: INFO: namespace container-probe-8322 deletion completed in 6.201621386s

• [SLOW TEST:255.978 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:34:17.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:35:17.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7425" for this suite.
Jan 20 13:35:39.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:35:40.081: INFO: namespace container-probe-7425 deletion completed in 22.201834043s

• [SLOW TEST:82.357 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:35:40.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:35:40.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-182" for this suite.
Jan 20 13:36:04.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:36:04.434: INFO: namespace pods-182 deletion completed in 24.158687642s

• [SLOW TEST:24.353 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:36:04.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 20 13:36:04.567: INFO: Waiting up to 5m0s for pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d" in namespace "containers-6859" to be "success or failure"
Jan 20 13:36:04.572: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388407ms
Jan 20 13:36:06.587: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0189512s
Jan 20 13:36:08.600: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032085308s
Jan 20 13:36:10.618: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050044102s
Jan 20 13:36:12.625: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056801806s
STEP: Saw pod success
Jan 20 13:36:12.625: INFO: Pod "client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d" satisfied condition "success or failure"
Jan 20 13:36:12.629: INFO: Trying to get logs from node iruya-node pod client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d container test-container: 
STEP: delete the pod
Jan 20 13:36:12.791: INFO: Waiting for pod client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d to disappear
Jan 20 13:36:12.812: INFO: Pod client-containers-54e185eb-45e1-4a6b-bb9d-f5304817b53d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:36:12.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6859" for this suite.
Jan 20 13:36:18.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:36:18.991: INFO: namespace containers-6859 deletion completed in 6.121064176s

• [SLOW TEST:14.556 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:36:18.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:36:19.115: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 11.41027ms)
Jan 20 13:36:19.122: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.339797ms)
Jan 20 13:36:19.127: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.791338ms)
Jan 20 13:36:19.133: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.988194ms)
Jan 20 13:36:19.139: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.120109ms)
Jan 20 13:36:19.145: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.171639ms)
Jan 20 13:36:19.149: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.50057ms)
Jan 20 13:36:19.173: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.537278ms)
Jan 20 13:36:19.178: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.771106ms)
Jan 20 13:36:19.183: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.153494ms)
Jan 20 13:36:19.189: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.799817ms)
Jan 20 13:36:19.196: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.687079ms)
Jan 20 13:36:19.199: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.616651ms)
Jan 20 13:36:19.203: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.090008ms)
Jan 20 13:36:19.208: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.662312ms)
Jan 20 13:36:19.221: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.534653ms)
Jan 20 13:36:19.229: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.336719ms)
Jan 20 13:36:19.235: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.602524ms)
Jan 20 13:36:19.240: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.032355ms)
Jan 20 13:36:19.246: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.229933ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:36:19.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-221" for this suite.
Jan 20 13:36:25.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:36:25.457: INFO: namespace proxy-221 deletion completed in 6.20598896s

• [SLOW TEST:6.465 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
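Each of the twenty `(0)`–`(19)` requests logged above reaches the kubelet's log directory through the apiserver's node proxy subresource. As a minimal sketch of how that path is assembled (the helper function is ours, not part of the e2e framework), using the node name from the log:

```python
def node_log_proxy_path(node_name: str) -> str:
    """Apiserver proxy subresource path for a node's /logs/ directory,
    matching the URLs printed in the proxy test above."""
    return f"/api/v1/nodes/{node_name}/proxy/logs/"

print(node_log_proxy_path("iruya-node"))
# /api/v1/nodes/iruya-node/proxy/logs/
```

The test simply issues GETs against this path twenty times and records the status code and latency of each response.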
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:36:25.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-27c1c50c-c549-4f05-b0ad-0b5132cdc13e
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:36:25.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6560" for this suite.
Jan 20 13:36:31.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:36:31.708: INFO: namespace configmap-6560 deletion completed in 6.191223892s

• [SLOW TEST:6.251 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
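The test above passes because the apiserver rejects a ConfigMap whose data map contains an empty key. A simplified re-implementation of that key rule (the real validation lives in the apiserver and is more involved; this sketch only captures the non-empty, length, and character constraints):

```python
import re

# ConfigMap data keys must be non-empty, at most 253 characters, and
# consist of alphanumerics, '-', '_', and '.'. Simplified sketch only.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    return 0 < len(key) <= 253 and _KEY_RE.match(key) is not None

assert not is_valid_configmap_key("")          # the empty key the test submits
assert is_valid_configmap_key("game.properties")
```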
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:36:31.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-47045a45-fc4c-43f2-9bfd-f09e68c002c0
STEP: Creating a pod to test consume secrets
Jan 20 13:36:31.803: INFO: Waiting up to 5m0s for pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf" in namespace "secrets-1552" to be "success or failure"
Jan 20 13:36:31.876: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 72.879362ms
Jan 20 13:36:33.897: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093290856s
Jan 20 13:36:35.910: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106393152s
Jan 20 13:36:37.918: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115080466s
Jan 20 13:36:39.932: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128380153s
STEP: Saw pod success
Jan 20 13:36:39.932: INFO: Pod "pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf" satisfied condition "success or failure"
Jan 20 13:36:39.937: INFO: Trying to get logs from node iruya-node pod pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf container secret-volume-test: 
STEP: delete the pod
Jan 20 13:36:39.986: INFO: Waiting for pod pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf to disappear
Jan 20 13:36:40.120: INFO: Pod pod-secrets-916e18f1-946f-411d-b9b5-e43e6f612cbf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:36:40.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1552" for this suite.
Jan 20 13:36:46.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:36:46.268: INFO: namespace secrets-1552 deletion completed in 6.138261513s

• [SLOW TEST:14.559 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
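What the test above exercises, in miniature: a Secret's `data` values are stored base64-encoded, and a secret volume's `items` list remaps each key to a chosen file path under the mount. A minimal sketch of that projection (key, value, and mount path here are illustrative, not taken from the test):

```python
import base64

# Secret data as stored in the API: values are base64-encoded (illustrative).
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

# A volume "items" mapping renames the key to a path inside the mount.
items = [{"key": "data-1", "path": "new-path-data-1"}]

def project(secret_data, items, mount_path="/etc/secret-volume"):
    """Simulate how mapped secret keys would be laid out as files."""
    files = {}
    for item in items:
        content = base64.b64decode(secret_data[item["key"]])
        files[f"{mount_path}/{item['path']}"] = content
    return files

print(project(secret_data, items))
```

The test's container then reads the mapped file back and the framework compares its logs against the expected plaintext.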
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:36:46.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 20 13:36:46.394: INFO: PodSpec: initContainers in spec.initContainers
Jan 20 13:37:49.386: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9f47df56-8fdc-4bf0-b7fd-b12baea8ed6e", GenerateName:"", Namespace:"init-container-2963", SelfLink:"/api/v1/namespaces/init-container-2963/pods/pod-init-9f47df56-8fdc-4bf0-b7fd-b12baea8ed6e", UID:"3fb43a13-491a-4084-903f-56c4d682161d", ResourceVersion:"21183897", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715124206, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"394184746"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bph46", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001678040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bph46", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bph46", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bph46", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000aec088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc001fc2120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000aec130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000aec150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000aec158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000aec15c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124206, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124206, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124206, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124206, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0022ba060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f46af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f46bd0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://8fa4cf883a164387cd42107b94caa241b88a8a31d8770482a74a17da7b2fc848"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022ba0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022ba080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:37:49.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2963" for this suite.
Jan 20 13:38:11.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:38:11.642: INFO: namespace init-container-2963 deletion completed in 22.228933071s

• [SLOW TEST:85.373 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
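The `RestartCount:3` on `init1` in the dump above follows from two behaviors: app containers (`run1`) stay `Waiting` until every init container succeeds, and with `RestartPolicy:"Always"` the kubelet keeps restarting the failing init container under exponential backoff (10s doubling, capped at 5 minutes). A sketch of that backoff schedule (the exact timing also includes image pull and run time, so the sum is only roughly comparable to the ~63s window in the log):

```python
def crashloop_delays(restarts, base=10, cap=300):
    """Kubelet-style restart backoff in seconds: 10, 20, 40, ...
    doubling each time and capped at 5 minutes."""
    delays = []
    d = base
    for _ in range(restarts):
        delays.append(min(d, cap))
        d *= 2
    return delays

# Backoff waits accumulated around three restarts of the failing init1.
print(crashloop_delays(3))
```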
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:38:11.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 20 13:38:11.735: INFO: Waiting up to 5m0s for pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a" in namespace "downward-api-7530" to be "success or failure"
Jan 20 13:38:11.763: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.17621ms
Jan 20 13:38:13.776: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041229097s
Jan 20 13:38:15.788: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052870208s
Jan 20 13:38:17.800: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065563487s
Jan 20 13:38:19.811: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076408401s
STEP: Saw pod success
Jan 20 13:38:19.811: INFO: Pod "downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a" satisfied condition "success or failure"
Jan 20 13:38:19.816: INFO: Trying to get logs from node iruya-node pod downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a container dapi-container: 
STEP: delete the pod
Jan 20 13:38:19.935: INFO: Waiting for pod downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a to disappear
Jan 20 13:38:19.939: INFO: Pod downward-api-5d4983a4-eba0-4085-8159-9ba92b26247a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:38:19.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7530" for this suite.
Jan 20 13:38:26.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:38:26.167: INFO: namespace downward-api-7530 deletion completed in 6.221289738s

• [SLOW TEST:14.525 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
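The mechanism under test here: the downward API lets a container's env vars be populated from `fieldRef` paths on the pod object (e.g. `metadata.uid`). A rough simulation of that resolution — the env var names, pod name, and UID below are illustrative, not values from this run:

```python
# Illustrative pod object; paths mirror the downward API's fieldRef.fieldPath.
pod = {"metadata": {"name": "downward-api-example",
                    "namespace": "downward-api-7530",
                    "uid": "example-uid-1234"}}

FIELD_REFS = {  # env var name -> fieldPath (names are illustrative)
    "POD_NAME": "metadata.name",
    "POD_NAMESPACE": "metadata.namespace",
    "POD_UID": "metadata.uid",
}

def resolve_env(pod, field_refs):
    """Resolve each fieldRef path against the pod object, as the kubelet
    does when injecting downward-API env vars into the container."""
    env = {}
    for name, path in field_refs.items():
        value = pod
        for part in path.split("."):
            value = value[part]
        env[name] = value
    return env

print(resolve_env(pod, FIELD_REFS))
```

The test container echoes these env vars, and the framework asserts the UID in the logs matches the pod's actual UID.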
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:38:26.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9c996e68-0408-4991-8714-e78b4420a43f
STEP: Creating a pod to test consume secrets
Jan 20 13:38:26.540: INFO: Waiting up to 5m0s for pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc" in namespace "secrets-9917" to be "success or failure"
Jan 20 13:38:26.551: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.831885ms
Jan 20 13:38:28.563: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022084901s
Jan 20 13:38:30.574: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033417507s
Jan 20 13:38:32.591: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050370214s
Jan 20 13:38:34.605: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065002728s
STEP: Saw pod success
Jan 20 13:38:34.606: INFO: Pod "pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc" satisfied condition "success or failure"
Jan 20 13:38:34.610: INFO: Trying to get logs from node iruya-node pod pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc container secret-volume-test: 
STEP: delete the pod
Jan 20 13:38:34.704: INFO: Waiting for pod pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc to disappear
Jan 20 13:38:34.715: INFO: Pod pod-secrets-be341737-53c8-4d4e-997f-6bf2522a16cc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:38:34.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9917" for this suite.
Jan 20 13:38:40.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:38:40.914: INFO: namespace secrets-9917 deletion completed in 6.193502973s
STEP: Destroying namespace "secret-namespace-3166" for this suite.
Jan 20 13:38:46.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:38:47.108: INFO: namespace secret-namespace-3166 deletion completed in 6.194083823s

• [SLOW TEST:20.940 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
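The point of this test is that object names are unique only *within* a namespace, so the two namespaces torn down above (`secrets-9917` and `secret-namespace-3166`) can each hold a Secret with the same name without interfering with the volume mount. The namespacing rule in miniature:

```python
# Namespaced resources are effectively keyed by (namespace, name):
# the same name can coexist in two namespaces but not twice in one.
store = {}

def create_secret(namespace, name, data):
    key = (namespace, name)
    if key in store:
        raise ValueError(f"secret {name!r} already exists in {namespace!r}")
    store[key] = data

create_secret("secrets-9917", "secret-test", {"data-1": "a"})
create_secret("secret-namespace-3166", "secret-test", {"data-1": "b"})
print(sorted(store))  # both creates succeed; names collide only per-namespace
```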
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:38:47.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-xj4c
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 13:38:47.271: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xj4c" in namespace "subpath-6247" to be "success or failure"
Jan 20 13:38:47.288: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.931862ms
Jan 20 13:38:49.300: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029429069s
Jan 20 13:38:51.312: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040774848s
Jan 20 13:38:53.320: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048718583s
Jan 20 13:38:55.328: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.05735612s
Jan 20 13:38:57.338: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 10.067619581s
Jan 20 13:38:59.347: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 12.076258913s
Jan 20 13:39:01.356: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 14.085459556s
Jan 20 13:39:03.367: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 16.095966287s
Jan 20 13:39:05.375: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 18.103758718s
Jan 20 13:39:07.388: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 20.117586582s
Jan 20 13:39:09.400: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 22.128918482s
Jan 20 13:39:11.408: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 24.136986413s
Jan 20 13:39:13.418: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 26.146952821s
Jan 20 13:39:15.429: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Running", Reason="", readiness=true. Elapsed: 28.158537613s
Jan 20 13:39:17.450: INFO: Pod "pod-subpath-test-configmap-xj4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.178961283s
STEP: Saw pod success
Jan 20 13:39:17.450: INFO: Pod "pod-subpath-test-configmap-xj4c" satisfied condition "success or failure"
Jan 20 13:39:17.461: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-xj4c container test-container-subpath-configmap-xj4c: 
STEP: delete the pod
Jan 20 13:39:17.538: INFO: Waiting for pod pod-subpath-test-configmap-xj4c to disappear
Jan 20 13:39:17.544: INFO: Pod pod-subpath-test-configmap-xj4c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xj4c
Jan 20 13:39:17.544: INFO: Deleting pod "pod-subpath-test-configmap-xj4c" in namespace "subpath-6247"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:39:17.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6247" for this suite.
Jan 20 13:39:23.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:39:23.679: INFO: namespace subpath-6247 deletion completed in 6.124256269s

• [SLOW TEST:36.571 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
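"Atomic writer volumes" refers to how configmap/secret-style volumes update their contents: new data is written into a fresh versioned directory and a single internal symlink is retargeted with an atomic rename, so a reader through the mount never observes a half-written file. A sketch of that symlink-swap idea (directory names here are illustrative; the real kubelet uses timestamped directories behind a `..data` link):

```python
import os
import tempfile

# Two versioned payload directories, old and new.
root = tempfile.mkdtemp()
for version, content in [("..v1", b"old"), ("..v2", b"new")]:
    os.mkdir(os.path.join(root, version))
    with open(os.path.join(root, version, "content"), "wb") as f:
        f.write(content)

os.symlink("..v1", os.path.join(root, "..data"))      # initial state
os.symlink("..v2", os.path.join(root, "..data_tmp"))  # stage the new target
# rename(2) over an existing path is atomic on POSIX: readers see old or new,
# never a missing or partial link.
os.rename(os.path.join(root, "..data_tmp"), os.path.join(root, "..data"))

with open(os.path.join(root, "..data", "content"), "rb") as f:
    data = f.read()
print(data)
```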
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:39:23.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.250.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.250.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.250.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.250.46_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.250.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.250.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.250.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.250.46_tcp@PTR;sleep 1; done

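The probe scripts above are dense but mostly repetitive: each `dig`/`test -n` pair writes an `OK` marker file per record type. The two non-obvious steps are the `awk` one-liner that derives a pod's DNS A-record name from its IP, and the octet reversal for the PTR lookup. Both reduce to a couple of lines of Python:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Mirror the awk pipeline in the probe script: a pod's A record is its
    IP with dots replaced by dashes, under <namespace>.pod.<domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

def ptr_name(ip):
    """Reverse the octets for an in-addr.arpa PTR query, as the script
    does by hand for the service IP 10.110.250.46."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(pod_a_record("10.44.0.1", "dns-4179"))  # illustrative pod IP
print(ptr_name("10.110.250.46"))
```

(The doubled `$$` in the logged commands is template escaping; the shell that runs them sees single `$`.)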
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 13:39:36.053: INFO: Unable to read wheezy_udp@dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.060: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.069: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.082: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.095: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.109: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.115: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.123: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.133: INFO: Unable to read 10.110.250.46_udp@PTR from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.144: INFO: Unable to read 10.110.250.46_tcp@PTR from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.151: INFO: Unable to read jessie_udp@dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.162: INFO: Unable to read jessie_tcp@dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.192: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.196: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.201: INFO: Unable to read jessie_udp@PodARecord from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.212: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.217: INFO: Unable to read 10.110.250.46_udp@PTR from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.220: INFO: Unable to read 10.110.250.46_tcp@PTR from pod dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417: the server could not find the requested resource (get pods dns-test-abfe9928-3fa8-4850-809f-f15137cee417)
Jan 20 13:39:36.220: INFO: Lookups using dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417 failed for: [wheezy_udp@dns-test-service.dns-4179.svc.cluster.local wheezy_tcp@dns-test-service.dns-4179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.250.46_udp@PTR 10.110.250.46_tcp@PTR jessie_udp@dns-test-service.dns-4179.svc.cluster.local jessie_tcp@dns-test-service.dns-4179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4179.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-4179.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-4179.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.250.46_udp@PTR 10.110.250.46_tcp@PTR]

Jan 20 13:39:41.341: INFO: DNS probes using dns-4179/dns-test-abfe9928-3fa8-4850-809f-f15137cee417 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:39:41.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4179" for this suite.
Jan 20 13:39:47.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:39:47.852: INFO: namespace dns-4179 deletion completed in 6.201884288s

• [SLOW TEST:24.172 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:39:47.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 20 13:39:47.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 20 13:39:48.150: INFO: stderr: ""
Jan 20 13:39:48.150: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:39:48.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8961" for this suite.
Jan 20 13:39:54.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:39:54.360: INFO: namespace kubectl-8961 deletion completed in 6.200765076s

• [SLOW TEST:6.507 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:39:54.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:39:54.541: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0181898a-0096-49ab-9ed1-945db65c8d4f", Controller:(*bool)(0xc002aa1a02), BlockOwnerDeletion:(*bool)(0xc002aa1a03)}}
Jan 20 13:39:54.558: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e663ae6c-54aa-4d4b-89ab-ea67ad8526c4", Controller:(*bool)(0xc0025f57ca), BlockOwnerDeletion:(*bool)(0xc0025f57cb)}}
Jan 20 13:39:54.571: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"87e4af08-6a6d-4017-a0f1-6fa2ed8de0be", Controller:(*bool)(0xc002cdf1fa), BlockOwnerDeletion:(*bool)(0xc002cdf1fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:39:59.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-266" for this suite.
Jan 20 13:40:05.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:40:05.814: INFO: namespace gc-266 deletion completed in 6.185319628s

• [SLOW TEST:11.454 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:40:05.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 20 13:40:14.541: INFO: Successfully updated pod "pod-update-6ce0c5ab-a88a-463d-8ff0-bf4eb9427f9c"
STEP: verifying the updated pod is in kubernetes
Jan 20 13:40:14.591: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:40:14.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9438" for this suite.
Jan 20 13:40:36.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:40:36.717: INFO: namespace pods-9438 deletion completed in 22.121434043s

• [SLOW TEST:30.902 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:40:36.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0120 13:40:47.709006       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 13:40:47.709: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:40:47.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1543" for this suite.
Jan 20 13:41:03.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:41:03.899: INFO: namespace gc-1543 deletion completed in 16.182692261s

• [SLOW TEST:27.182 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:41:03.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hlzpf in namespace proxy-5139
I0120 13:41:04.244061       8 runners.go:180] Created replication controller with name: proxy-service-hlzpf, namespace: proxy-5139, replica count: 1
I0120 13:41:05.295509       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:06.295976       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:07.296714       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:08.297363       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:09.298124       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:10.298899       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 13:41:11.299338       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 13:41:12.299601       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 13:41:13.299907       8 runners.go:180] proxy-service-hlzpf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 13:41:13.305: INFO: setup took 9.244362149s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 20 13:41:13.341: INFO: (0) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 35.347095ms)
Jan 20 13:41:13.341: INFO: (0) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 35.539687ms)
Jan 20 13:41:13.341: INFO: (0) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 35.633237ms)
Jan 20 13:41:13.341: INFO: (0) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 35.694152ms)
Jan 20 13:41:13.343: INFO: (0) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 37.125223ms)
Jan 20 13:41:13.343: INFO: (0) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 37.168387ms)
Jan 20 13:41:13.344: INFO: (0) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 38.178141ms)
Jan 20 13:41:13.344: INFO: (0) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 38.578551ms)
Jan 20 13:41:13.345: INFO: (0) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 39.733937ms)
Jan 20 13:41:13.346: INFO: (0) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 40.32768ms)
Jan 20 13:41:13.350: INFO: (0) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 44.036842ms)
Jan 20 13:41:13.365: INFO: (0) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 59.407ms)
Jan 20 13:41:13.365: INFO: (0) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 59.626219ms)
Jan 20 13:41:13.365: INFO: (0) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 59.642679ms)
Jan 20 13:41:13.366: INFO: (0) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 60.154323ms)
Jan 20 13:41:13.366: INFO: (0) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 14.730311ms)
Jan 20 13:41:13.382: INFO: (1) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 15.153913ms)
Jan 20 13:41:13.390: INFO: (1) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 23.246183ms)
Jan 20 13:41:13.392: INFO: (1) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 24.629749ms)
Jan 20 13:41:13.394: INFO: (1) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 26.700359ms)
Jan 20 13:41:13.394: INFO: (1) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 27.125535ms)
Jan 20 13:41:13.396: INFO: (1) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 28.967761ms)
Jan 20 13:41:13.396: INFO: (1) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 28.498399ms)
Jan 20 13:41:13.396: INFO: (1) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 28.21822ms)
Jan 20 13:41:13.396: INFO: (1) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 28.64211ms)
Jan 20 13:41:13.396: INFO: (1) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 29.254423ms)
Jan 20 13:41:13.397: INFO: (1) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 29.456402ms)
Jan 20 13:41:13.397: INFO: (1) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 29.205379ms)
Jan 20 13:41:13.397: INFO: (1) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 29.696382ms)
Jan 20 13:41:13.412: INFO: (2) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 15.209646ms)
Jan 20 13:41:13.412: INFO: (2) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 14.996557ms)
Jan 20 13:41:13.413: INFO: (2) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 15.736206ms)
Jan 20 13:41:13.417: INFO: (2) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 19.986135ms)
Jan 20 13:41:13.417: INFO: (2) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 19.935899ms)
Jan 20 13:41:13.417: INFO: (2) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 20.117314ms)
Jan 20 13:41:13.418: INFO: (2) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 20.974502ms)
Jan 20 13:41:13.418: INFO: (2) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 21.086065ms)
Jan 20 13:41:13.419: INFO: (2) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 21.507007ms)
Jan 20 13:41:13.419: INFO: (2) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 21.79291ms)
Jan 20 13:41:13.419: INFO: (2) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 22.174126ms)
Jan 20 13:41:13.420: INFO: (2) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: ... (200; 22.684007ms)
Jan 20 13:41:13.420: INFO: (2) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 23.524248ms)
Jan 20 13:41:13.422: INFO: (2) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 25.39581ms)
Jan 20 13:41:13.424: INFO: (2) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 27.282333ms)
Jan 20 13:41:13.432: INFO: (3) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 7.779164ms)
Jan 20 13:41:13.436: INFO: (3) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 11.246574ms)
Jan 20 13:41:13.436: INFO: (3) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 12.011683ms)
Jan 20 13:41:13.436: INFO: (3) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 11.831951ms)
Jan 20 13:41:13.437: INFO: (3) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 12.764869ms)
Jan 20 13:41:13.439: INFO: (3) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 14.877793ms)
Jan 20 13:41:13.440: INFO: (3) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 15.0342ms)
Jan 20 13:41:13.440: INFO: (3) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 15.264486ms)
Jan 20 13:41:13.440: INFO: (3) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 15.378018ms)
Jan 20 13:41:13.441: INFO: (3) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 16.46026ms)
Jan 20 13:41:13.441: INFO: (3) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 16.454103ms)
Jan 20 13:41:13.441: INFO: (3) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 16.717211ms)
Jan 20 13:41:13.442: INFO: (3) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 17.174928ms)
Jan 20 13:41:13.443: INFO: (3) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 18.119348ms)
Jan 20 13:41:13.454: INFO: (4) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 10.898692ms)
Jan 20 13:41:13.460: INFO: (4) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 16.987948ms)
Jan 20 13:41:13.460: INFO: (4) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 16.983314ms)
Jan 20 13:41:13.469: INFO: (4) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 26.012417ms)
Jan 20 13:41:13.470: INFO: (4) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 26.802279ms)
Jan 20 13:41:13.470: INFO: (4) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 27.377251ms)
Jan 20 13:41:13.470: INFO: (4) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 27.216738ms)
Jan 20 13:41:13.470: INFO: (4) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 27.27657ms)
Jan 20 13:41:13.470: INFO: (4) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 27.26001ms)
Jan 20 13:41:13.471: INFO: (4) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 28.325585ms)
Jan 20 13:41:13.471: INFO: (4) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 28.081966ms)
Jan 20 13:41:13.472: INFO: (4) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 28.907959ms)
Jan 20 13:41:13.472: INFO: (4) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 28.87985ms)
Jan 20 13:41:13.472: INFO: (4) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 29.064182ms)
Jan 20 13:41:13.472: INFO: (4) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 29.15325ms)
Jan 20 13:41:13.473: INFO: (4) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 12.440784ms)
Jan 20 13:41:13.487: INFO: (5) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 13.648821ms)
Jan 20 13:41:13.487: INFO: (5) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 13.329872ms)
Jan 20 13:41:13.487: INFO: (5) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 13.996855ms)
Jan 20 13:41:13.488: INFO: (5) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 14.741729ms)
Jan 20 13:41:13.489: INFO: (5) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 14.81688ms)
Jan 20 13:41:13.493: INFO: (5) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 20.118809ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 20.473408ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 20.182949ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 20.045811ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 20.354566ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 20.787738ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 20.83508ms)
Jan 20 13:41:13.494: INFO: (5) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 20.654274ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 11.32337ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 11.262267ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 11.403412ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 11.363135ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 11.316734ms)
Jan 20 13:41:13.506: INFO: (6) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 11.471931ms)
Jan 20 13:41:13.507: INFO: (6) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 13.078274ms)
Jan 20 13:41:13.508: INFO: (6) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 13.175706ms)
Jan 20 13:41:13.509: INFO: (6) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 14.17488ms)
Jan 20 13:41:13.509: INFO: (6) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 14.431141ms)
Jan 20 13:41:13.509: INFO: (6) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 14.355825ms)
Jan 20 13:41:13.509: INFO: (6) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 14.641413ms)
Jan 20 13:41:13.509: INFO: (6) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 14.816717ms)
Jan 20 13:41:13.510: INFO: (6) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 15.646176ms)
Jan 20 13:41:13.511: INFO: (6) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 16.207791ms)
Jan 20 13:41:13.511: INFO: (6) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: ... (200; 19.825692ms)
Jan 20 13:41:13.533: INFO: (7) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 22.170105ms)
Jan 20 13:41:13.535: INFO: (7) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 24.429732ms)
Jan 20 13:41:13.541: INFO: (7) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 29.840739ms)
Jan 20 13:41:13.542: INFO: (7) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 30.04145ms)
Jan 20 13:41:13.542: INFO: (7) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 30.992711ms)
Jan 20 13:41:13.543: INFO: (7) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 31.878625ms)
Jan 20 13:41:13.543: INFO: (7) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 31.819561ms)
Jan 20 13:41:13.543: INFO: (7) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 31.807799ms)
Jan 20 13:41:13.543: INFO: (7) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 32.24605ms)
Jan 20 13:41:13.544: INFO: (7) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 32.908672ms)
Jan 20 13:41:13.550: INFO: (8) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 6.058019ms)
Jan 20 13:41:13.551: INFO: (8) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 6.906715ms)
Jan 20 13:41:13.553: INFO: (8) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 8.793511ms)
Jan 20 13:41:13.553: INFO: (8) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 8.793648ms)
Jan 20 13:41:13.553: INFO: (8) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 8.683975ms)
Jan 20 13:41:13.553: INFO: (8) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 8.901208ms)
Jan 20 13:41:13.554: INFO: (8) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 9.241999ms)
Jan 20 13:41:13.554: INFO: (8) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 9.337498ms)
Jan 20 13:41:13.554: INFO: (8) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 5.010244ms)
Jan 20 13:41:13.565: INFO: (9) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 7.541318ms)
Jan 20 13:41:13.565: INFO: (9) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 7.627309ms)
Jan 20 13:41:13.566: INFO: (9) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 7.780513ms)
Jan 20 13:41:13.566: INFO: (9) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 7.958656ms)
Jan 20 13:41:13.566: INFO: (9) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 8.137353ms)
Jan 20 13:41:13.566: INFO: (9) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 8.071372ms)
Jan 20 13:41:13.566: INFO: (9) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 8.226447ms)
Jan 20 13:41:13.571: INFO: (9) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 13.341326ms)
Jan 20 13:41:13.571: INFO: (9) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 13.371251ms)
Jan 20 13:41:13.571: INFO: (9) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 13.487884ms)
Jan 20 13:41:13.572: INFO: (9) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 13.923716ms)
Jan 20 13:41:13.573: INFO: (9) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 14.809536ms)
Jan 20 13:41:13.585: INFO: (10) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 12.498952ms)
Jan 20 13:41:13.586: INFO: (10) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 12.797466ms)
Jan 20 13:41:13.586: INFO: (10) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 12.9074ms)
Jan 20 13:41:13.586: INFO: (10) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 13.371206ms)
Jan 20 13:41:13.588: INFO: (10) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 15.492799ms)
Jan 20 13:41:13.588: INFO: (10) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 15.563005ms)
Jan 20 13:41:13.589: INFO: (10) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 15.867028ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 18.083686ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 18.477979ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 18.441852ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 18.439419ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 18.568033ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 18.548197ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 18.462282ms)
Jan 20 13:41:13.591: INFO: (10) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 18.578308ms)
Jan 20 13:41:13.608: INFO: (11) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 15.758337ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 16.783882ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 16.889231ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 17.385497ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 17.570157ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 17.547179ms)
Jan 20 13:41:13.609: INFO: (11) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 17.797384ms)
Jan 20 13:41:13.610: INFO: (11) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 18.446906ms)
Jan 20 13:41:13.611: INFO: (11) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 18.988015ms)
Jan 20 13:41:13.611: INFO: (11) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 19.056928ms)
Jan 20 13:41:13.611: INFO: (11) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 19.294294ms)
Jan 20 13:41:13.611: INFO: (11) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 19.260339ms)
Jan 20 13:41:13.611: INFO: (11) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 19.119753ms)
Jan 20 13:41:13.623: INFO: (12) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 11.663757ms)
Jan 20 13:41:13.623: INFO: (12) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 11.342877ms)
Jan 20 13:41:13.623: INFO: (12) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 11.15061ms)
Jan 20 13:41:13.623: INFO: (12) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 11.164382ms)
Jan 20 13:41:13.624: INFO: (12) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 12.059946ms)
Jan 20 13:41:13.624: INFO: (12) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 12.51493ms)
Jan 20 13:41:13.624: INFO: (12) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 12.193133ms)
Jan 20 13:41:13.624: INFO: (12) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 12.338658ms)
Jan 20 13:41:13.624: INFO: (12) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 5.101216ms)
Jan 20 13:41:13.634: INFO: (13) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 5.590927ms)
Jan 20 13:41:13.636: INFO: (13) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 7.148895ms)
Jan 20 13:41:13.636: INFO: (13) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 10.209696ms)
Jan 20 13:41:13.639: INFO: (13) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 9.78569ms)
Jan 20 13:41:13.639: INFO: (13) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 9.923486ms)
Jan 20 13:41:13.639: INFO: (13) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 10.642515ms)
Jan 20 13:41:13.639: INFO: (13) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 10.88739ms)
Jan 20 13:41:13.640: INFO: (13) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 10.990946ms)
Jan 20 13:41:13.640: INFO: (13) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 11.247943ms)
Jan 20 13:41:13.640: INFO: (13) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 11.296698ms)
Jan 20 13:41:13.640: INFO: (13) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 11.541305ms)
Jan 20 13:41:13.640: INFO: (13) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 11.673312ms)
Jan 20 13:41:13.646: INFO: (14) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 5.429151ms)
Jan 20 13:41:13.646: INFO: (14) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 5.439717ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 8.474785ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 8.774496ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 8.777694ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 8.779988ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 8.985238ms)
Jan 20 13:41:13.649: INFO: (14) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: ... (200; 9.338551ms)
Jan 20 13:41:13.650: INFO: (14) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 9.428153ms)
Jan 20 13:41:13.652: INFO: (14) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 11.420367ms)
Jan 20 13:41:13.652: INFO: (14) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 11.370175ms)
Jan 20 13:41:13.652: INFO: (14) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 11.480554ms)
Jan 20 13:41:13.652: INFO: (14) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 11.79188ms)
Jan 20 13:41:13.654: INFO: (14) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 13.192059ms)
Jan 20 13:41:13.662: INFO: (15) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 7.974971ms)
Jan 20 13:41:13.662: INFO: (15) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 8.270771ms)
Jan 20 13:41:13.662: INFO: (15) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 8.369263ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 9.090432ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 9.187842ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 9.050061ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 9.202156ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 9.136947ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 9.091338ms)
Jan 20 13:41:13.663: INFO: (15) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test (200; 9.150914ms)
Jan 20 13:41:13.664: INFO: (15) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 9.965053ms)
Jan 20 13:41:13.664: INFO: (15) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 10.107613ms)
Jan 20 13:41:13.664: INFO: (15) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 10.454355ms)
Jan 20 13:41:13.664: INFO: (15) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 10.386747ms)
Jan 20 13:41:13.664: INFO: (15) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 10.375656ms)
Jan 20 13:41:13.670: INFO: (16) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 5.377701ms)
Jan 20 13:41:13.671: INFO: (16) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 6.666517ms)
Jan 20 13:41:13.672: INFO: (16) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 7.319917ms)
Jan 20 13:41:13.672: INFO: (16) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 7.845596ms)
Jan 20 13:41:13.672: INFO: (16) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 7.944712ms)
Jan 20 13:41:13.673: INFO: (16) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 8.880341ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 11.289584ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 11.562309ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 11.55461ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 11.60607ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 11.918685ms)
Jan 20 13:41:13.676: INFO: (16) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 11.996602ms)
Jan 20 13:41:13.686: INFO: (17) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 9.569057ms)
Jan 20 13:41:13.686: INFO: (17) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 9.776804ms)
Jan 20 13:41:13.686: INFO: (17) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 9.626718ms)
Jan 20 13:41:13.686: INFO: (17) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 9.607879ms)
Jan 20 13:41:13.689: INFO: (17) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 12.139296ms)
Jan 20 13:41:13.689: INFO: (17) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 12.382914ms)
Jan 20 13:41:13.689: INFO: (17) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 12.775998ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 13.137851ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 13.466393ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 13.532587ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 13.640589ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 13.863745ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 13.863252ms)
Jan 20 13:41:13.690: INFO: (17) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 14.056323ms)
Jan 20 13:41:13.691: INFO: (17) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 14.23029ms)
Jan 20 13:41:13.691: INFO: (17) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: test<... (200; 4.807401ms)
Jan 20 13:41:13.696: INFO: (18) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 5.756563ms)
Jan 20 13:41:13.697: INFO: (18) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 6.088999ms)
Jan 20 13:41:13.698: INFO: (18) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 7.341001ms)
Jan 20 13:41:13.698: INFO: (18) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 7.661641ms)
Jan 20 13:41:13.699: INFO: (18) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 7.701504ms)
Jan 20 13:41:13.699: INFO: (18) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 7.91367ms)
Jan 20 13:41:13.699: INFO: (18) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 8.40726ms)
Jan 20 13:41:13.700: INFO: (18) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname1/proxy/: foo (200; 9.012399ms)
Jan 20 13:41:13.701: INFO: (18) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 9.961672ms)
Jan 20 13:41:13.706: INFO: (18) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname1/proxy/: foo (200; 15.439291ms)
Jan 20 13:41:13.706: INFO: (18) /api/v1/namespaces/proxy-5139/services/http:proxy-service-hlzpf:portname2/proxy/: bar (200; 15.432988ms)
Jan 20 13:41:13.706: INFO: (18) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 15.384258ms)
Jan 20 13:41:13.706: INFO: (18) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname2/proxy/: tls qux (200; 15.466663ms)
Jan 20 13:41:13.707: INFO: (18) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 15.925071ms)
Jan 20 13:41:13.712: INFO: (19) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:1080/proxy/: test<... (200; 4.868217ms)
Jan 20 13:41:13.715: INFO: (19) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:462/proxy/: tls qux (200; 8.12356ms)
Jan 20 13:41:13.715: INFO: (19) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:460/proxy/: tls baz (200; 8.006389ms)
Jan 20 13:41:13.715: INFO: (19) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:1080/proxy/: ... (200; 8.033188ms)
Jan 20 13:41:13.716: INFO: (19) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb/proxy/: test (200; 8.52226ms)
Jan 20 13:41:13.717: INFO: (19) /api/v1/namespaces/proxy-5139/services/https:proxy-service-hlzpf:tlsportname1/proxy/: tls baz (200; 10.378351ms)
Jan 20 13:41:13.719: INFO: (19) /api/v1/namespaces/proxy-5139/pods/http:proxy-service-hlzpf-bl7mb:162/proxy/: bar (200; 11.199755ms)
Jan 20 13:41:13.719: INFO: (19) /api/v1/namespaces/proxy-5139/services/proxy-service-hlzpf:portname2/proxy/: bar (200; 11.337002ms)
Jan 20 13:41:13.719: INFO: (19) /api/v1/namespaces/proxy-5139/pods/proxy-service-hlzpf-bl7mb:160/proxy/: foo (200; 11.243776ms)
Jan 20 13:41:13.719: INFO: (19) /api/v1/namespaces/proxy-5139/pods/https:proxy-service-hlzpf-bl7mb:443/proxy/: 
>>> kubeConfig: /root/.kube/config
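Each proxy probe logged above hits an apiserver proxy subresource path of the form /api/v1/namespaces/{ns}/{pods|services}/[scheme:]{name}[:port]/proxy/{path}. A minimal sketch of how such a path is assembled (plain Go; `proxyPath` is a hypothetical helper, not the e2e framework's actual code):

```go
package main

import "fmt"

// proxyPath builds an apiserver proxy subresource path like the ones probed
// in the log above. scheme and port are optional; "" omits them.
// Hypothetical helper for illustration only.
func proxyPath(ns, resource, scheme, name, port, subpath string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/%s", ns, resource, target, subpath)
}

func main() {
	// Matches the pod probe on HTTPS port 462 in the log.
	fmt.Println(proxyPath("proxy-5139", "pods", "https", "proxy-service-hlzpf-bl7mb", "462", ""))
	// Matches the service probe on named port portname1.
	fmt.Println(proxyPath("proxy-5139", "services", "", "proxy-service-hlzpf", "portname1", ""))
}
```

The test then asserts that each path returns 200 with the expected body (foo, bar, tls baz, tls qux) and records the round-trip latency shown in parentheses.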
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 20 13:44:25.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:25.148: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:27.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:27.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:29.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:29.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:31.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:31.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:33.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:33.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:35.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:35.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:37.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:37.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:39.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:39.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:41.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:41.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:43.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:43.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:45.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:45.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:47.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:47.159: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:49.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:49.165: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:51.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:51.186: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:53.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:53.195: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:55.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:55.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:57.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:57.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:44:59.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:44:59.166: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:01.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:01.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:03.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:03.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:05.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:05.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:07.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:07.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:09.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:09.159: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:11.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:11.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:13.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:13.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:15.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:15.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:17.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:17.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:19.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:19.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:21.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:21.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:23.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:23.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:25.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:25.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:27.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:27.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:29.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:29.159: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:31.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:31.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:33.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:33.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:35.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:35.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:37.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:37.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:39.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:39.187: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:41.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:41.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:43.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:43.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:45.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:45.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:47.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:47.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:49.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:49.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:51.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:51.159: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:53.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:53.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:55.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:55.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:57.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:57.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:45:59.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:45:59.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:01.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:01.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:03.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:03.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:05.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:05.157: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:07.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:07.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:09.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:09.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:11.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:11.164: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:13.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:13.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 13:46:15.149: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 13:46:15.154: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:46:15.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7830" for this suite.
Jan 20 13:46:37.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:46:37.320: INFO: namespace container-lifecycle-hook-7830 deletion completed in 22.161320762s

• [SLOW TEST:311.724 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:46:37.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:46:37.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f" in namespace "downward-api-1153" to be "success or failure"
Jan 20 13:46:37.473: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.495479ms
Jan 20 13:46:39.487: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021940944s
Jan 20 13:46:41.502: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036786553s
Jan 20 13:46:43.514: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048963392s
Jan 20 13:46:45.522: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056484637s
Jan 20 13:46:47.531: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065131769s
STEP: Saw pod success
Jan 20 13:46:47.531: INFO: Pod "downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f" satisfied condition "success or failure"
Jan 20 13:46:47.535: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f container client-container: 
STEP: delete the pod
Jan 20 13:46:47.615: INFO: Waiting for pod downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f to disappear
Jan 20 13:46:47.632: INFO: Pod downwardapi-volume-1f619ad5-537a-4cdc-92db-7efcc8d8832f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:46:47.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1153" for this suite.
Jan 20 13:46:53.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:46:53.852: INFO: namespace downward-api-1153 deletion completed in 6.208957284s

• [SLOW TEST:16.531 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
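The "should set mode on item file" spec above mounts a downward API volume and verifies the per-item file mode. A minimal sketch of the kind of pod manifest it exercises, written as a Python dict; the names, mount path, image, and the 0400 mode are illustrative assumptions, not values taken from this log:

```python
# Hypothetical pod manifest: expose metadata.name as a file with an
# explicit per-item mode via a downward API volume.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                    "mode": 0o400,  # the per-item mode the test checks
                }],
            },
        }],
    },
}
```

The test's "success or failure" condition corresponds to such a pod running to completion (`Phase="Succeeded"`) after the file is written with the requested mode.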
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:46:53.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:47:24.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-855" for this suite.
Jan 20 13:47:30.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:47:30.416: INFO: namespace namespaces-855 deletion completed in 6.140975796s
STEP: Destroying namespace "nsdeletetest-5209" for this suite.
Jan 20 13:47:30.418: INFO: Namespace nsdeletetest-5209 was already deleted
STEP: Destroying namespace "nsdeletetest-9544" for this suite.
Jan 20 13:47:36.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:47:36.570: INFO: namespace nsdeletetest-9544 deletion completed in 6.151875092s

• [SLOW TEST:42.717 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:47:36.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-2046fadd-ab2f-4200-8b57-65f0ccfb5fa0
STEP: Creating secret with name secret-projected-all-test-volume-f28f37d9-6505-482f-b4cc-66ac8c87e488
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 20 13:47:36.705: INFO: Waiting up to 5m0s for pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7" in namespace "projected-5988" to be "success or failure"
Jan 20 13:47:36.715: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.377155ms
Jan 20 13:47:38.725: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019666948s
Jan 20 13:47:40.756: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050519528s
Jan 20 13:47:42.769: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063232696s
Jan 20 13:47:44.778: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072339401s
STEP: Saw pod success
Jan 20 13:47:44.778: INFO: Pod "projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7" satisfied condition "success or failure"
Jan 20 13:47:44.782: INFO: Trying to get logs from node iruya-node pod projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7 container projected-all-volume-test: 
STEP: delete the pod
Jan 20 13:47:44.832: INFO: Waiting for pod projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7 to disappear
Jan 20 13:47:44.867: INFO: Pod projected-volume-7a214c95-43d4-4ac8-a054-aa9f29e292e7 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:47:44.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5988" for this suite.
Jan 20 13:47:50.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:47:50.995: INFO: namespace projected-5988 deletion completed in 6.115714932s

• [SLOW TEST:14.425 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
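The "Projected combined" spec above creates a ConfigMap and a Secret, then projects both (plus downward API fields) into a single volume. A minimal sketch of such a projected volume entry as a Python dict; the source names are illustrative, not the generated names from the log:

```python
# Hypothetical projected volume: one mount combining a ConfigMap,
# a Secret, and downward API items under a single directory.
projected_volume = {
    "name": "all-in-one",
    "projected": {
        "sources": [
            {"configMap": {"name": "example-configmap"}},
            {"secret": {"name": "example-secret"}},
            {"downwardAPI": {"items": [{
                "path": "podname",
                "fieldRef": {"fieldPath": "metadata.name"},
            }]}},
        ],
    },
}
```

This is what distinguishes a projected volume from mounting each source separately: all three sources appear under one `volumeMount`, which is the behavior the spec verifies.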
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:47:50.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-97e5e3c8-64d7-4e41-8f7f-426d3f9e5605
STEP: Creating a pod to test consume secrets
Jan 20 13:47:51.096: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e" in namespace "projected-5921" to be "success or failure"
Jan 20 13:47:51.105: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.032049ms
Jan 20 13:47:53.117: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020676144s
Jan 20 13:47:55.128: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032082178s
Jan 20 13:47:57.142: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046063889s
Jan 20 13:47:59.154: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058414985s
STEP: Saw pod success
Jan 20 13:47:59.155: INFO: Pod "pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e" satisfied condition "success or failure"
Jan 20 13:47:59.161: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 13:47:59.223: INFO: Waiting for pod pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e to disappear
Jan 20 13:47:59.244: INFO: Pod pod-projected-secrets-170c6f20-24a5-489b-9323-236391ef356e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:47:59.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5921" for this suite.
Jan 20 13:48:05.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:48:05.534: INFO: namespace projected-5921 deletion completed in 6.282983344s

• [SLOW TEST:14.539 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:48:05.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:48:05.640: INFO: Creating deployment "test-recreate-deployment"
Jan 20 13:48:05.679: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 20 13:48:05.696: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 20 13:48:07.713: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 20 13:48:07.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:48:09.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:48:11.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715124885, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 13:48:13.854: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 20 13:48:13.910: INFO: Updating deployment test-recreate-deployment
Jan 20 13:48:13.910: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 20 13:48:14.387: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9489,SelfLink:/apis/apps/v1/namespaces/deployment-9489/deployments/test-recreate-deployment,UID:215f4426-9674-417f-bb9a-f22c9d116932,ResourceVersion:21185328,Generation:2,CreationTimestamp:2020-01-20 13:48:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-20 13:48:14 +0000 UTC 2020-01-20 13:48:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-20 13:48:14 +0000 UTC 2020-01-20 13:48:05 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 20 13:48:14.426: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9489,SelfLink:/apis/apps/v1/namespaces/deployment-9489/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c1b89611-10e1-48a1-b445-7738071cfdd4,ResourceVersion:21185325,Generation:1,CreationTimestamp:2020-01-20 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 215f4426-9674-417f-bb9a-f22c9d116932 0xc002b92bc7 0xc002b92bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 20 13:48:14.426: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 20 13:48:14.426: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9489,SelfLink:/apis/apps/v1/namespaces/deployment-9489/replicasets/test-recreate-deployment-6df85df6b9,UID:7e7248ef-5579-4199-b008-6601ff8558ea,ResourceVersion:21185315,Generation:2,CreationTimestamp:2020-01-20 13:48:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 215f4426-9674-417f-bb9a-f22c9d116932 0xc002b92c97 0xc002b92c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 20 13:48:14.476: INFO: Pod "test-recreate-deployment-5c8c9cc69d-2zjp7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-2zjp7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9489,SelfLink:/api/v1/namespaces/deployment-9489/pods/test-recreate-deployment-5c8c9cc69d-2zjp7,UID:4cc0f744-ef8a-4173-a6b2-b232f0ab42f4,ResourceVersion:21185330,Generation:0,CreationTimestamp:2020-01-20 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c1b89611-10e1-48a1-b445-7738071cfdd4 0xc0015a4f37 0xc0015a4f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slvhf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slvhf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-slvhf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015a4fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015a4fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:48:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:48:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:48:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:48:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-20 13:48:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:48:14.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9489" for this suite.
Jan 20 13:48:22.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:48:22.655: INFO: namespace deployment-9489 deletion completed in 8.168830384s

• [SLOW TEST:17.119 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
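The RecreateDeployment spec above checks that with the `Recreate` strategy, the old ReplicaSet (`redis`) is scaled to 0 before the new one (`nginx`) starts, so old and new pods never overlap. A minimal sketch of a deployment using that strategy, assembled from names that do appear in the dumps above; the template details are simplified:

```python
# Sketch of the deployment shape the test drives: strategy Recreate
# means all old pods are terminated before any new pod is created.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "test-recreate-deployment"},
    "spec": {
        "replicas": 1,
        "strategy": {"type": "Recreate"},  # no rollingUpdate block
        "selector": {"matchLabels": {"name": "sample-pod-3"}},
        "template": {
            "metadata": {"labels": {"name": "sample-pod-3"}},
            "spec": {"containers": [{
                "name": "nginx",
                "image": "docker.io/library/nginx:1.14-alpine",
            }]},
        },
    },
}
```

This matches the log's final state: the old ReplicaSet dump shows `Replicas:*0` while the new pod is still `ContainerCreating`, i.e. downtime during the switch is expected under `Recreate`.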
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:48:22.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:48:22.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6694" for this suite.
Jan 20 13:48:28.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:48:29.059: INFO: namespace kubelet-test-6694 deletion completed in 6.231957669s

• [SLOW TEST:6.403 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:48:29.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 20 13:48:47.278: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:47.300: INFO: Pod pod-with-prestop-http-hook still exists
Jan 20 13:48:49.300: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:49.308: INFO: Pod pod-with-prestop-http-hook still exists
Jan 20 13:48:51.301: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:51.308: INFO: Pod pod-with-prestop-http-hook still exists
Jan 20 13:48:53.301: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:53.320: INFO: Pod pod-with-prestop-http-hook still exists
Jan 20 13:48:55.301: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:55.310: INFO: Pod pod-with-prestop-http-hook still exists
Jan 20 13:48:57.301: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 20 13:48:57.308: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:48:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5051" for this suite.
Jan 20 13:49:19.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:49:19.495: INFO: namespace container-lifecycle-hook-5051 deletion completed in 22.138283039s

• [SLOW TEST:50.436 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:49:19.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 20 13:49:30.165: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7230604c-4531-40db-b10a-0fcfb6194051"
Jan 20 13:49:30.165: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7230604c-4531-40db-b10a-0fcfb6194051" in namespace "pods-4022" to be "terminated due to deadline exceeded"
Jan 20 13:49:30.178: INFO: Pod "pod-update-activedeadlineseconds-7230604c-4531-40db-b10a-0fcfb6194051": Phase="Running", Reason="", readiness=true. Elapsed: 12.569498ms
Jan 20 13:49:32.186: INFO: Pod "pod-update-activedeadlineseconds-7230604c-4531-40db-b10a-0fcfb6194051": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020575488s
Jan 20 13:49:32.186: INFO: Pod "pod-update-activedeadlineseconds-7230604c-4531-40db-b10a-0fcfb6194051" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:49:32.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4022" for this suite.
Jan 20 13:49:38.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:49:38.390: INFO: namespace pods-4022 deletion completed in 6.199042125s

• [SLOW TEST:18.895 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:49:38.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 20 13:49:38.570: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:49:56.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2690" for this suite.
Jan 20 13:50:02.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:50:02.756: INFO: namespace pods-2690 deletion completed in 6.202430386s

• [SLOW TEST:24.365 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:50:02.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 13:50:02.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7240'
Jan 20 13:50:04.894: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 13:50:04.894: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 20 13:50:06.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7240'
Jan 20 13:50:07.186: INFO: stderr: ""
Jan 20 13:50:07.186: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:50:07.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7240" for this suite.
Jan 20 13:50:13.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:50:13.417: INFO: namespace kubectl-7240 deletion completed in 6.222527049s

• [SLOW TEST:10.661 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:50:13.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-lkjh
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 13:50:13.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lkjh" in namespace "subpath-141" to be "success or failure"
Jan 20 13:50:13.625: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 50.839013ms
Jan 20 13:50:15.635: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060856835s
Jan 20 13:50:17.642: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067117819s
Jan 20 13:50:19.654: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079959503s
Jan 20 13:50:21.667: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 8.092800457s
Jan 20 13:50:23.677: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 10.102638892s
Jan 20 13:50:25.686: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 12.111787185s
Jan 20 13:50:27.699: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 14.124914507s
Jan 20 13:50:29.710: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 16.135912489s
Jan 20 13:50:31.720: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 18.145123402s
Jan 20 13:50:33.753: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 20.178800096s
Jan 20 13:50:35.765: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 22.190092517s
Jan 20 13:50:37.776: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 24.201679209s
Jan 20 13:50:39.785: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 26.210092851s
Jan 20 13:50:41.797: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Running", Reason="", readiness=true. Elapsed: 28.222576885s
Jan 20 13:50:43.804: INFO: Pod "pod-subpath-test-downwardapi-lkjh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.229711434s
STEP: Saw pod success
Jan 20 13:50:43.804: INFO: Pod "pod-subpath-test-downwardapi-lkjh" satisfied condition "success or failure"
Jan 20 13:50:43.808: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-lkjh container test-container-subpath-downwardapi-lkjh: 
STEP: delete the pod
Jan 20 13:50:43.967: INFO: Waiting for pod pod-subpath-test-downwardapi-lkjh to disappear
Jan 20 13:50:43.986: INFO: Pod pod-subpath-test-downwardapi-lkjh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lkjh
Jan 20 13:50:43.986: INFO: Deleting pod "pod-subpath-test-downwardapi-lkjh" in namespace "subpath-141"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:50:43.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-141" for this suite.
Jan 20 13:50:50.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:50:50.140: INFO: namespace subpath-141 deletion completed in 6.140197103s

• [SLOW TEST:36.723 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:50:50.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:50:50.290: INFO: Creating deployment "nginx-deployment"
Jan 20 13:50:50.297: INFO: Waiting for observed generation 1
Jan 20 13:50:52.452: INFO: Waiting for all required pods to come up
Jan 20 13:50:54.198: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 20 13:51:22.708: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 20 13:51:22.719: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 20 13:51:22.738: INFO: Updating deployment nginx-deployment
Jan 20 13:51:22.738: INFO: Waiting for observed generation 2
Jan 20 13:51:25.506: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 20 13:51:25.991: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 20 13:51:25.996: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 20 13:51:26.410: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 20 13:51:26.410: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 20 13:51:26.415: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 20 13:51:26.422: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 20 13:51:26.422: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 20 13:51:26.436: INFO: Updating deployment nginx-deployment
Jan 20 13:51:26.436: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 20 13:51:26.637: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 20 13:51:26.828: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 20 13:51:30.911: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-805,SelfLink:/apis/apps/v1/namespaces/deployment-805/deployments/nginx-deployment,UID:6f5af22c-86e1-443f-a98f-8bfe1e1753c6,ResourceVersion:21185963,Generation:3,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-20 13:51:23 +0000 UTC 2020-01-20 13:50:50 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-20 13:51:26 +0000 UTC 2020-01-20 13:51:26 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 20 13:51:32.339: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-805,SelfLink:/apis/apps/v1/namespaces/deployment-805/replicasets/nginx-deployment-55fb7cb77f,UID:718b9686-3ced-442d-bcec-50ad9cf902f9,ResourceVersion:21186009,Generation:3,CreationTimestamp:2020-01-20 13:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6f5af22c-86e1-443f-a98f-8bfe1e1753c6 0xc0030d8ad7 0xc0030d8ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 20 13:51:32.339: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 20 13:51:32.340: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-805,SelfLink:/apis/apps/v1/namespaces/deployment-805/replicasets/nginx-deployment-7b8c6f4498,UID:c3f1312b-2d2a-432f-b179-6d5e085285c6,ResourceVersion:21186000,Generation:3,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6f5af22c-86e1-443f-a98f-8bfe1e1753c6 0xc0030d8ba7 0xc0030d8ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-b9lvn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b9lvn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-b9lvn,UID:610c5da3-3334-4ade-a693-7fdc443c38a4,ResourceVersion:21185937,Generation:0,CreationTimestamp:2020-01-20 13:51:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299f6b7 0xc00299f6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc00299f730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299f750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-20 13:51:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-f28mk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f28mk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-f28mk,UID:c5663632-ccb2-4db2-b84d-1c6ca7b56302,ResourceVersion:21185916,Generation:0,CreationTimestamp:2020-01-20 13:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299f827 0xc00299f828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00299f890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299f8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-20 13:51:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-gkmnt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gkmnt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-gkmnt,UID:f6b6d51a-1967-479d-a210-927ccc0c1cdb,ResourceVersion:21185988,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299f987 0xc00299f988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc00299fa10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fa30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-gr2sn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gr2sn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-gr2sn,UID:c1abadc9-8f0f-469d-ba55-10f2c92f6be3,ResourceVersion:21185981,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299fab7 0xc00299fab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00299fb20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-lcnhm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lcnhm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-lcnhm,UID:02f8733a-cb1e-4f10-8414-ccb79cc776e1,ResourceVersion:21185910,Generation:0,CreationTimestamp:2020-01-20 13:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299fbc7 0xc00299fbc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc00299fc40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-20 13:51:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-mk279" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mk279,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-mk279,UID:3dd0262e-3278-439f-a2eb-192817aabd98,ResourceVersion:21185991,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299fd37 0xc00299fd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00299fda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-plscx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-plscx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-plscx,UID:6f7f00b5-790e-4a9b-a45d-f3e76352d420,ResourceVersion:21185977,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299fe47 0xc00299fe48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc00299fec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.215: INFO: Pod "nginx-deployment-55fb7cb77f-rpmms" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rpmms,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-rpmms,UID:b07d8c80-e211-46fd-8d35-5f07f9343f70,ResourceVersion:21186003,Generation:0,CreationTimestamp:2020-01-20 13:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc00299ff67 0xc00299ff68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00299ffd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00299fff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.216: INFO: Pod "nginx-deployment-55fb7cb77f-rtbgt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rtbgt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-rtbgt,UID:fa058785-4663-4065-957e-c5f8db7c5f8f,ResourceVersion:21185936,Generation:0,CreationTimestamp:2020-01-20 13:51:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc002bea077 0xc002bea078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bea0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-20 13:51:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.216: INFO: Pod "nginx-deployment-55fb7cb77f-w7nbc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w7nbc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-w7nbc,UID:194b5f79-3de9-4497-a3c6-a8c2b9470022,ResourceVersion:21185920,Generation:0,CreationTimestamp:2020-01-20 13:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc002bea1d7 0xc002bea1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc002bea250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-20 13:51:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.216: INFO: Pod "nginx-deployment-55fb7cb77f-w9jcp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w9jcp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-w9jcp,UID:30de94f2-363b-48d5-b8b3-7ddf63a5ae8c,ResourceVersion:21185966,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc002bea347 0xc002bea348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bea3b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.216: INFO: Pod "nginx-deployment-55fb7cb77f-z7876" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z7876,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-z7876,UID:2d2cd0ac-c792-44ff-9881-ed65cacda458,ResourceVersion:21185990,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc002bea457 0xc002bea458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc002bea4d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.216: INFO: Pod "nginx-deployment-55fb7cb77f-zh8kd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zh8kd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-55fb7cb77f-zh8kd,UID:54772a0a-b3b2-4b32-8102-65fc7836adf1,ResourceVersion:21185993,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 718b9686-3ced-442d-bcec-50ad9cf902f9 0xc002bea577 0xc002bea578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc002bea5f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-48454" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-48454,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-48454,UID:8773ba77-5d81-4395-99b7-411632fb6dd7,ResourceVersion:21185980,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bea697 0xc002bea698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bea710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-55gtt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-55gtt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-55gtt,UID:75df3578-55c4-4573-aa52-e22867022033,ResourceVersion:21185965,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bea7b7 0xc002bea7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bea820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-5q7td" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5q7td,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-5q7td,UID:0f1903c9-010e-42c3-9ade-97ba428e9ebf,ResourceVersion:21185976,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bea8d7 0xc002bea8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bea940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bea960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-6xjx4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6xjx4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-6xjx4,UID:b9864026-b679-42dc-9efe-a1dd42d6332e,ResourceVersion:21185992,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bea9f7 0xc002bea9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beaa60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beaa80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-6z2td" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6z2td,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-6z2td,UID:fffdd447-8f98-4949-8009-b757c55e9578,ResourceVersion:21185877,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beab17 0xc002beab18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beab80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beaba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-20 13:50:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9dd60520dc5dcd0fcda3ef85d033073b90115dfa0b0bdfd5733d8a72177ef999}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.217: INFO: Pod "nginx-deployment-7b8c6f4498-7lfsc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7lfsc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-7lfsc,UID:5c30e466-1d2d-4978-9a02-075475276a5e,ResourceVersion:21185987,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beac77 0xc002beac78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bead20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bead60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.218: INFO: Pod "nginx-deployment-7b8c6f4498-bbwtw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bbwtw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-bbwtw,UID:094727a2-267b-41d8-a9b4-84c5cd699830,ResourceVersion:21186002,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beae37 0xc002beae38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beaea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beaec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-20 13:51:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.218: INFO: Pod "nginx-deployment-7b8c6f4498-bwbc8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bwbc8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-bwbc8,UID:e39b0839-cf6f-4ea6-afcf-29beee363451,ResourceVersion:21185841,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beaf97 0xc002beaf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-20 13:50:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d13f21950845882a6f4a32d3e854e6006b01156156a190db7593813737276889}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.218: INFO: Pod "nginx-deployment-7b8c6f4498-ccx5x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ccx5x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-ccx5x,UID:77e0442e-c376-496e-96bd-5a8e61183a1b,ResourceVersion:21185848,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb127 0xc002beb128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb1c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-20 13:50:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://610d7d2660497027491cb00c20cb9e983625fa550ebbf932f8be14ba77209b12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.218: INFO: Pod "nginx-deployment-7b8c6f4498-df84c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-df84c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-df84c,UID:ded7500d-22be-44a7-a2dc-acde7a5b21e3,ResourceVersion:21185986,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb2b7 0xc002beb2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.218: INFO: Pod "nginx-deployment-7b8c6f4498-fl4h2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fl4h2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-fl4h2,UID:03c3726e-bdf8-4e91-b932-7385623d8af6,ResourceVersion:21185995,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb3e7 0xc002beb3e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.219: INFO: Pod "nginx-deployment-7b8c6f4498-jdrqz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jdrqz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-jdrqz,UID:2797347c-aa39-47fd-aadb-7d36a79d0458,ResourceVersion:21185994,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb4f7 0xc002beb4f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.219: INFO: Pod "nginx-deployment-7b8c6f4498-krjtk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-krjtk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-krjtk,UID:24c366d3-8304-42d6-a864-24fe417bf649,ResourceVersion:21186001,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb607 0xc002beb608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-20 13:51:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.219: INFO: Pod "nginx-deployment-7b8c6f4498-l6xml" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l6xml,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-l6xml,UID:6276c31f-4e1f-4d78-b37d-c58ebbd25287,ResourceVersion:21185871,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb767 0xc002beb768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-20 13:50:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://86a2e135a535dcf42df115d7631dbee04cd87b51f7d668241175e983bda4cd13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.219: INFO: Pod "nginx-deployment-7b8c6f4498-mb68b" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mb68b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-mb68b,UID:cf987e26-5ac4-41a3-a9b1-50209a5378ac,ResourceVersion:21185859,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beb8c7 0xc002beb8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-20 13:50:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5774a1864fd3508ba469478e3c4a147062afd2f276800e5c641a1261f2e7456e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.219: INFO: Pod "nginx-deployment-7b8c6f4498-mdbqz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mdbqz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-mdbqz,UID:c6f24c40-91cb-40c9-a1b6-8830f2aa0460,ResourceVersion:21185884,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002beba37 0xc002beba38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bebaa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bebac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-20 13:50:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5aa46361abb01749db3e236904e5f5a0a5e7ace1ecd80be80301a49a5b6d6b27}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.220: INFO: Pod "nginx-deployment-7b8c6f4498-wbdhm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wbdhm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-wbdhm,UID:a8e421d6-fb50-43c9-8bad-56214d7c2801,ResourceVersion:21185856,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bebb97 0xc002bebb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bebc10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bebc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-20 13:50:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://10cb5121b156fd14f8f3c077e7b447eabc6933613784ac8b09d362ffdf63c725}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.221: INFO: Pod "nginx-deployment-7b8c6f4498-xk2s8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xk2s8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-xk2s8,UID:64dd14cf-7a76-4f2a-b4b2-c03c03ce3d1d,ResourceVersion:21185852,Generation:0,CreationTimestamp:2020-01-20 13:50:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bebd07 0xc002bebd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bebd80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bebda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:50:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-20 13:50:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 13:51:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a59aa1dfeeef30cd021b70f3f43b6a4545be17f32b5c8148205fc2a49aac1566}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.221: INFO: Pod "nginx-deployment-7b8c6f4498-z5kjn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z5kjn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-z5kjn,UID:7f9e9d16-bab6-4453-a916-a90809068e6c,ResourceVersion:21185979,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bebe77 0xc002bebe78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bebef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bebf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 20 13:51:34.221: INFO: Pod "nginx-deployment-7b8c6f4498-zsnkj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zsnkj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-805,SelfLink:/api/v1/namespaces/deployment-805/pods/nginx-deployment-7b8c6f4498-zsnkj,UID:d4b637e1-f800-432e-942d-9006abe0fade,ResourceVersion:21185978,Generation:0,CreationTimestamp:2020-01-20 13:51:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c3f1312b-2d2a-432f-b179-6d5e085285c6 0xc002bebf97 0xc002bebf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vzdg6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vzdg6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vzdg6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00333a000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00333a020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 13:51:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:51:34.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-805" for this suite.
Jan 20 13:52:27.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:52:27.597: INFO: namespace deployment-805 deletion completed in 51.666073105s

• [SLOW TEST:97.457 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
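The test above exercises Deployment proportional scaling: when a Deployment is scaled mid-rollout, the replica delta is split across its ReplicaSets in proportion to their current sizes. A minimal Python sketch of that distribution rule (a deliberate simplification of the controller's logic — the real implementation also weighs max-surge budgets and ReplicaSet creation order):

```python
def proportional_scale(replica_counts, delta):
    """Distribute `delta` new replicas across ReplicaSets by current size.

    Illustrative simplification of Deployment proportional scaling:
    each ReplicaSet receives a share of the change proportional to its
    size, and any integer-division leftover goes to the largest sets.
    """
    total = sum(replica_counts)
    # integer share for each ReplicaSet, rounded down
    shares = [delta * c // total for c in replica_counts]
    leftover = delta - sum(shares)
    # hand out the remainder one replica at a time, largest set first
    order = sorted(range(len(replica_counts)), key=lambda i: -replica_counts[i])
    for i in order[:leftover]:
        shares[i] += 1
    return [c + s for c, s in zip(replica_counts, shares)]

# Scaling a rollout with an 8-replica new RS and a 2-replica old RS by +5
# keeps the 4:1 ratio: the new RS gets 4 of the 5, the old RS gets 1.
print(proportional_scale([8, 2], 5))  # → [12, 3]
```

With uneven ratios the leftover replica lands on the larger ReplicaSet, which is why the log above shows the new pods appearing unevenly across the two nodes during the scale-up.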
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:52:27.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 20 13:52:27.826: INFO: Waiting up to 5m0s for pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8" in namespace "emptydir-5333" to be "success or failure"
Jan 20 13:52:27.856: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.714564ms
Jan 20 13:52:29.879: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052604129s
Jan 20 13:52:31.897: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070485939s
Jan 20 13:52:33.983: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156863088s
Jan 20 13:52:35.994: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Running", Reason="", readiness=true. Elapsed: 8.168169722s
Jan 20 13:52:38.005: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178612829s
STEP: Saw pod success
Jan 20 13:52:38.005: INFO: Pod "pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8" satisfied condition "success or failure"
Jan 20 13:52:38.016: INFO: Trying to get logs from node iruya-node pod pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8 container test-container: 
STEP: delete the pod
Jan 20 13:52:38.074: INFO: Waiting for pod pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8 to disappear
Jan 20 13:52:38.140: INFO: Pod pod-e8e7cc0d-7478-4ab1-a4e9-685adbd961c8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:52:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5333" for this suite.
Jan 20 13:52:44.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:52:44.342: INFO: namespace emptydir-5333 deletion completed in 6.180450911s

• [SLOW TEST:16.743 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
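The emptyDir test above mounts a default-medium volume and asserts the mount point's permission bits are 0777. The same bit check, sketched in Python against a local temporary directory standing in for the container's mount point (an illustration of the assertion, not the e2e framework's code):

```python
import os
import stat
import tempfile

def mode_bits(path):
    """Return only the permission bits of a path, e.g. 0o777."""
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    # emptyDir mounts default to mode 0777 on the node; chmod stands in
    # for the kubelet setting up the volume directory.
    os.chmod(d, 0o777)
    assert mode_bits(d) == 0o777
    print(oct(mode_bits(d)))  # → 0o777
```

In the actual conformance test the container runs a small binary that prints the mount's mode, and the framework compares the pod's log output against the expected string, which is why the log shows it fetching logs from `test-container` after the pod succeeds.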
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:52:44.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:52:44.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea" in namespace "projected-4638" to be "success or failure"
Jan 20 13:52:44.507: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea": Phase="Pending", Reason="", readiness=false. Elapsed: 28.348691ms
Jan 20 13:52:46.523: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043906141s
Jan 20 13:52:48.539: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060194143s
Jan 20 13:52:50.556: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077400629s
Jan 20 13:52:52.570: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091513367s
STEP: Saw pod success
Jan 20 13:52:52.570: INFO: Pod "downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea" satisfied condition "success or failure"
Jan 20 13:52:52.576: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea container client-container: 
STEP: delete the pod
Jan 20 13:52:52.710: INFO: Waiting for pod downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea to disappear
Jan 20 13:52:52.722: INFO: Pod downwardapi-volume-102ea3cc-eadb-4acb-af42-9042f9a747ea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:52:52.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4638" for this suite.
Jan 20 13:53:00.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:53:00.908: INFO: namespace projected-4638 deletion completed in 8.179978535s

• [SLOW TEST:16.565 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:53:00.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 20 13:53:00.996: INFO: Waiting up to 5m0s for pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf" in namespace "emptydir-3979" to be "success or failure"
Jan 20 13:53:01.000: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416495ms
Jan 20 13:53:03.010: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013583414s
Jan 20 13:53:05.018: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021515175s
Jan 20 13:53:07.027: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031188534s
Jan 20 13:53:09.034: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0379614s
STEP: Saw pod success
Jan 20 13:53:09.034: INFO: Pod "pod-68baf151-a6b2-4d99-a677-1bee330c9edf" satisfied condition "success or failure"
Jan 20 13:53:09.039: INFO: Trying to get logs from node iruya-node pod pod-68baf151-a6b2-4d99-a677-1bee330c9edf container test-container: 
STEP: delete the pod
Jan 20 13:53:09.166: INFO: Waiting for pod pod-68baf151-a6b2-4d99-a677-1bee330c9edf to disappear
Jan 20 13:53:09.174: INFO: Pod pod-68baf151-a6b2-4d99-a677-1bee330c9edf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:53:09.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3979" for this suite.
Jan 20 13:53:15.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:53:15.329: INFO: namespace emptydir-3979 deletion completed in 6.147888233s

• [SLOW TEST:14.421 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:53:15.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 20 13:53:15.450: INFO: Waiting up to 5m0s for pod "pod-307d1483-92eb-473b-9279-5284c4478cd2" in namespace "emptydir-4271" to be "success or failure"
Jan 20 13:53:15.544: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 93.325663ms
Jan 20 13:53:17.554: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103015878s
Jan 20 13:53:19.576: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125684541s
Jan 20 13:53:21.586: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135216563s
Jan 20 13:53:23.596: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144795088s
STEP: Saw pod success
Jan 20 13:53:23.596: INFO: Pod "pod-307d1483-92eb-473b-9279-5284c4478cd2" satisfied condition "success or failure"
Jan 20 13:53:23.600: INFO: Trying to get logs from node iruya-node pod pod-307d1483-92eb-473b-9279-5284c4478cd2 container test-container: 
STEP: delete the pod
Jan 20 13:53:23.721: INFO: Waiting for pod pod-307d1483-92eb-473b-9279-5284c4478cd2 to disappear
Jan 20 13:53:23.731: INFO: Pod pod-307d1483-92eb-473b-9279-5284c4478cd2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:53:23.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4271" for this suite.
Jan 20 13:53:31.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:53:31.929: INFO: namespace emptydir-4271 deletion completed in 8.193489457s

• [SLOW TEST:16.600 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:53:31.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 20 13:53:32.183: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4001,SelfLink:/api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-resource-version,UID:9fd382fe-9c22-40f1-b9e2-17b9547e441a,ResourceVersion:21186439,Generation:0,CreationTimestamp:2020-01-20 13:53:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 13:53:32.183: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4001,SelfLink:/api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-resource-version,UID:9fd382fe-9c22-40f1-b9e2-17b9547e441a,ResourceVersion:21186440,Generation:0,CreationTimestamp:2020-01-20 13:53:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:53:32.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4001" for this suite.
Jan 20 13:53:38.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:53:38.320: INFO: namespace watch-4001 deletion completed in 6.132941966s

• [SLOW TEST:6.390 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:53:38.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:53:46.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4987" for this suite.
Jan 20 13:53:52.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:53:52.816: INFO: namespace emptydir-wrapper-4987 deletion completed in 6.166264471s

• [SLOW TEST:14.495 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:53:52.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-1ebcb8fc-7d32-4ffd-b3db-cf745f6cb2e1
STEP: Creating a pod to test consume secrets
Jan 20 13:53:52.940: INFO: Waiting up to 5m0s for pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266" in namespace "secrets-2623" to be "success or failure"
Jan 20 13:53:52.960: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266": Phase="Pending", Reason="", readiness=false. Elapsed: 19.075937ms
Jan 20 13:53:54.968: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027794426s
Jan 20 13:53:56.978: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037162672s
Jan 20 13:53:58.989: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048020979s
Jan 20 13:54:00.999: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058372489s
STEP: Saw pod success
Jan 20 13:54:00.999: INFO: Pod "pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266" satisfied condition "success or failure"
Jan 20 13:54:01.005: INFO: Trying to get logs from node iruya-node pod pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266 container secret-volume-test: 
STEP: delete the pod
Jan 20 13:54:02.140: INFO: Waiting for pod pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266 to disappear
Jan 20 13:54:02.161: INFO: Pod pod-secrets-03ad9d43-072a-4c28-987d-d43d02a90266 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:54:02.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2623" for this suite.
Jan 20 13:54:08.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:54:08.381: INFO: namespace secrets-2623 deletion completed in 6.209353531s

• [SLOW TEST:15.565 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:54:08.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:54:13.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1344" for this suite.
Jan 20 13:54:20.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:54:20.157: INFO: namespace watch-1344 deletion completed in 6.228262071s

• [SLOW TEST:11.775 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:54:20.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 20 13:54:20.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8214'
Jan 20 13:54:20.913: INFO: stderr: ""
Jan 20 13:54:20.914: INFO: stdout: "pod/pause created\n"
Jan 20 13:54:20.914: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 20 13:54:20.914: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8214" to be "running and ready"
Jan 20 13:54:20.941: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.743138ms
Jan 20 13:54:22.961: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046773522s
Jan 20 13:54:24.983: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069522455s
Jan 20 13:54:26.994: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079772594s
Jan 20 13:54:29.001: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.087048596s
Jan 20 13:54:29.001: INFO: Pod "pause" satisfied condition "running and ready"
Jan 20 13:54:29.001: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 20 13:54:29.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8214'
Jan 20 13:54:29.164: INFO: stderr: ""
Jan 20 13:54:29.164: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 20 13:54:29.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8214'
Jan 20 13:54:29.299: INFO: stderr: ""
Jan 20 13:54:29.299: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 20 13:54:29.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8214'
Jan 20 13:54:29.445: INFO: stderr: ""
Jan 20 13:54:29.445: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 20 13:54:29.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8214'
Jan 20 13:54:29.568: INFO: stderr: ""
Jan 20 13:54:29.568: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 20 13:54:29.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8214'
Jan 20 13:54:29.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 13:54:29.702: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 20 13:54:29.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8214'
Jan 20 13:54:29.836: INFO: stderr: "No resources found.\n"
Jan 20 13:54:29.836: INFO: stdout: ""
Jan 20 13:54:29.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8214 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 13:54:29.931: INFO: stderr: ""
Jan 20 13:54:29.931: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:54:29.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8214" for this suite.
Jan 20 13:54:35.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:54:36.137: INFO: namespace kubectl-8214 deletion completed in 6.194116994s

• [SLOW TEST:15.980 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:54:36.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5552
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 20 13:54:36.614: INFO: Found 0 stateful pods, waiting for 3
Jan 20 13:54:46.630: INFO: Found 2 stateful pods, waiting for 3
Jan 20 13:54:56.637: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:54:56.637: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:54:56.637: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 13:55:06.628: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:55:06.628: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:55:06.628: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 20 13:55:06.663: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 20 13:55:16.725: INFO: Updating stateful set ss2
Jan 20 13:55:16.748: INFO: Waiting for Pod statefulset-5552/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 20 13:55:27.059: INFO: Found 2 stateful pods, waiting for 3
Jan 20 13:55:37.085: INFO: Found 2 stateful pods, waiting for 3
Jan 20 13:55:47.070: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:55:47.070: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 13:55:47.070: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 20 13:55:47.106: INFO: Updating stateful set ss2
Jan 20 13:55:47.118: INFO: Waiting for Pod statefulset-5552/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 13:55:57.132: INFO: Waiting for Pod statefulset-5552/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 13:56:07.165: INFO: Updating stateful set ss2
Jan 20 13:56:07.180: INFO: Waiting for StatefulSet statefulset-5552/ss2 to complete update
Jan 20 13:56:07.180: INFO: Waiting for Pod statefulset-5552/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 13:56:17.200: INFO: Waiting for StatefulSet statefulset-5552/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 20 13:56:27.197: INFO: Deleting all statefulset in ns statefulset-5552
Jan 20 13:56:27.202: INFO: Scaling statefulset ss2 to 0
Jan 20 13:56:57.230: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 13:56:57.235: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:56:57.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5552" for this suite.
Jan 20 13:57:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:57:03.480: INFO: namespace statefulset-5552 deletion completed in 6.213760687s

• [SLOW TEST:147.339 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:57:03.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 20 13:57:03.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3876'
Jan 20 13:57:04.232: INFO: stderr: ""
Jan 20 13:57:04.232: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 20 13:57:05.242: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:05.242: INFO: Found 0 / 1
Jan 20 13:57:06.239: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:06.239: INFO: Found 0 / 1
Jan 20 13:57:07.244: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:07.244: INFO: Found 0 / 1
Jan 20 13:57:08.247: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:08.247: INFO: Found 0 / 1
Jan 20 13:57:09.242: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:09.242: INFO: Found 0 / 1
Jan 20 13:57:10.239: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:10.239: INFO: Found 0 / 1
Jan 20 13:57:11.245: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:11.245: INFO: Found 0 / 1
Jan 20 13:57:12.239: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:12.239: INFO: Found 1 / 1
Jan 20 13:57:12.239: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 20 13:57:12.242: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:12.242: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 20 13:57:12.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6p4z2 --namespace=kubectl-3876 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 20 13:57:12.414: INFO: stderr: ""
Jan 20 13:57:12.414: INFO: stdout: "pod/redis-master-6p4z2 patched\n"
STEP: checking annotations
Jan 20 13:57:12.422: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 13:57:12.422: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:57:12.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3876" for this suite.
Jan 20 13:57:34.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:57:34.598: INFO: namespace kubectl-3876 deletion completed in 22.16998409s

• [SLOW TEST:31.118 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:57:34.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 20 13:57:34.692: INFO: Waiting up to 5m0s for pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925" in namespace "var-expansion-3819" to be "success or failure"
Jan 20 13:57:34.696: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925": Phase="Pending", Reason="", readiness=false. Elapsed: 3.950514ms
Jan 20 13:57:36.709: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017338447s
Jan 20 13:57:38.719: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027211878s
Jan 20 13:57:40.864: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171845606s
Jan 20 13:57:42.886: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193921301s
STEP: Saw pod success
Jan 20 13:57:42.886: INFO: Pod "var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925" satisfied condition "success or failure"
Jan 20 13:57:42.896: INFO: Trying to get logs from node iruya-node pod var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925 container dapi-container: 
STEP: delete the pod
Jan 20 13:57:43.026: INFO: Waiting for pod var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925 to disappear
Jan 20 13:57:43.031: INFO: Pod var-expansion-eba47d38-b0a7-447a-b4cc-09ced4d83925 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:57:43.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3819" for this suite.
Jan 20 13:57:49.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:57:49.209: INFO: namespace var-expansion-3819 deletion completed in 6.17342402s

• [SLOW TEST:14.609 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:57:49.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 20 13:57:57.974: INFO: Successfully updated pod "annotationupdate9b90ae27-54ca-474d-8857-ee1947b2ed9d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:58:00.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7197" for this suite.
Jan 20 13:58:20.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:58:20.211: INFO: namespace downward-api-7197 deletion completed in 20.162853424s

• [SLOW TEST:31.001 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:58:20.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 20 13:58:20.374: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 20 13:58:25.384: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:58:26.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3479" for this suite.
Jan 20 13:58:32.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:58:32.565: INFO: namespace replication-controller-3479 deletion completed in 6.123077227s

• [SLOW TEST:12.353 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:58:32.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-05f2dcf0-f270-4dd5-95f4-cb494204caf0
STEP: Creating a pod to test consume configMaps
Jan 20 13:58:32.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880" in namespace "projected-3034" to be "success or failure"
Jan 20 13:58:32.807: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20951ms
Jan 20 13:58:34.816: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01700086s
Jan 20 13:58:36.826: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027404195s
Jan 20 13:58:38.837: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038387462s
Jan 20 13:58:40.851: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052467349s
Jan 20 13:58:42.867: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067923791s
Jan 20 13:58:44.878: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.078655044s
STEP: Saw pod success
Jan 20 13:58:44.878: INFO: Pod "pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880" satisfied condition "success or failure"
Jan 20 13:58:44.884: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 13:58:44.987: INFO: Waiting for pod pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880 to disappear
Jan 20 13:58:45.005: INFO: Pod pod-projected-configmaps-2a22bede-1e65-49df-82cd-ddc045186880 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:58:45.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3034" for this suite.
Jan 20 13:58:51.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:58:51.191: INFO: namespace projected-3034 deletion completed in 6.178583694s

• [SLOW TEST:18.626 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:58:51.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 20 13:58:59.674: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:58:59.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5784" for this suite.
Jan 20 13:59:05.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:59:05.923: INFO: namespace container-runtime-5784 deletion completed in 6.208757497s

• [SLOW TEST:14.732 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:59:05.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 13:59:06.009: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 20 13:59:08.078: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:59:09.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6605" for this suite.
Jan 20 13:59:17.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:59:17.461: INFO: namespace replication-controller-6605 deletion completed in 8.365202593s

• [SLOW TEST:11.537 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 13:59:17.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f099f15d-de4f-4b14-b2da-ffced42a2408 in namespace container-probe-2644
Jan 20 13:59:28.779: INFO: Started pod liveness-f099f15d-de4f-4b14-b2da-ffced42a2408 in namespace container-probe-2644
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 13:59:28.784: INFO: Initial restart count of pod liveness-f099f15d-de4f-4b14-b2da-ffced42a2408 is 0
Jan 20 13:59:55.682: INFO: Restart count of pod container-probe-2644/liveness-f099f15d-de4f-4b14-b2da-ffced42a2408 is now 1 (26.898473362s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 13:59:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2644" for this suite.
Jan 20 14:00:01.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:00:01.960: INFO: namespace container-probe-2644 deletion completed in 6.231030465s

• [SLOW TEST:44.499 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:00:01.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6448
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6448
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6448
Jan 20 14:00:02.110: INFO: Found 0 stateful pods, waiting for 1
Jan 20 14:00:12.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 20 14:00:12.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:00:14.895: INFO: stderr: "I0120 14:00:14.535155    1256 log.go:172] (0xc000ae0420) (0xc000aaa820) Create stream\nI0120 14:00:14.535372    1256 log.go:172] (0xc000ae0420) (0xc000aaa820) Stream added, broadcasting: 1\nI0120 14:00:14.545657    1256 log.go:172] (0xc000ae0420) Reply frame received for 1\nI0120 14:00:14.545780    1256 log.go:172] (0xc000ae0420) (0xc000ac40a0) Create stream\nI0120 14:00:14.545795    1256 log.go:172] (0xc000ae0420) (0xc000ac40a0) Stream added, broadcasting: 3\nI0120 14:00:14.547369    1256 log.go:172] (0xc000ae0420) Reply frame received for 3\nI0120 14:00:14.547404    1256 log.go:172] (0xc000ae0420) (0xc000ac4140) Create stream\nI0120 14:00:14.547419    1256 log.go:172] (0xc000ae0420) (0xc000ac4140) Stream added, broadcasting: 5\nI0120 14:00:14.548915    1256 log.go:172] (0xc000ae0420) Reply frame received for 5\nI0120 14:00:14.672179    1256 log.go:172] (0xc000ae0420) Data frame received for 5\nI0120 14:00:14.672256    1256 log.go:172] (0xc000ac4140) (5) Data frame handling\nI0120 14:00:14.672287    1256 log.go:172] (0xc000ac4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:00:14.737384    1256 log.go:172] (0xc000ae0420) Data frame received for 3\nI0120 14:00:14.737589    1256 log.go:172] (0xc000ac40a0) (3) Data frame handling\nI0120 14:00:14.737657    1256 log.go:172] (0xc000ac40a0) (3) Data frame sent\nI0120 14:00:14.882605    1256 log.go:172] (0xc000ae0420) (0xc000ac40a0) Stream removed, broadcasting: 3\nI0120 14:00:14.882971    1256 log.go:172] (0xc000ae0420) Data frame received for 1\nI0120 14:00:14.883057    1256 log.go:172] (0xc000ae0420) (0xc000ac4140) Stream removed, broadcasting: 5\nI0120 14:00:14.883145    1256 log.go:172] (0xc000aaa820) (1) Data frame handling\nI0120 14:00:14.883183    1256 log.go:172] (0xc000aaa820) (1) Data frame sent\nI0120 14:00:14.883200    1256 log.go:172] (0xc000ae0420) (0xc000aaa820) Stream removed, broadcasting: 1\nI0120 14:00:14.883225    1256 log.go:172] (0xc000ae0420) Go away received\nI0120 14:00:14.884876    1256 log.go:172] (0xc000ae0420) (0xc000aaa820) Stream removed, broadcasting: 1\nI0120 14:00:14.884892    1256 log.go:172] (0xc000ae0420) (0xc000ac40a0) Stream removed, broadcasting: 3\nI0120 14:00:14.884899    1256 log.go:172] (0xc000ae0420) (0xc000ac4140) Stream removed, broadcasting: 5\n"
Jan 20 14:00:14.896: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:00:14.896: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 14:00:14.904: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 20 14:00:24.919: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 14:00:24.919: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 14:00:24.954: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 20 14:00:24.954: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:24.954: INFO: 
Jan 20 14:00:24.954: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 20 14:00:26.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984189794s
Jan 20 14:00:27.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.735437547s
Jan 20 14:00:28.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.417446381s
Jan 20 14:00:29.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.396835658s
Jan 20 14:00:30.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.378914388s
Jan 20 14:00:31.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.357360932s
Jan 20 14:00:32.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.349450026s
Jan 20 14:00:33.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.235887856s
Jan 20 14:00:34.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 220.412375ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6448
Jan 20 14:00:35.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:00:36.350: INFO: stderr: "I0120 14:00:36.030004    1281 log.go:172] (0xc0009b0420) (0xc000626b40) Create stream\nI0120 14:00:36.030375    1281 log.go:172] (0xc0009b0420) (0xc000626b40) Stream added, broadcasting: 1\nI0120 14:00:36.036317    1281 log.go:172] (0xc0009b0420) Reply frame received for 1\nI0120 14:00:36.036464    1281 log.go:172] (0xc0009b0420) (0xc000626be0) Create stream\nI0120 14:00:36.036480    1281 log.go:172] (0xc0009b0420) (0xc000626be0) Stream added, broadcasting: 3\nI0120 14:00:36.039593    1281 log.go:172] (0xc0009b0420) Reply frame received for 3\nI0120 14:00:36.039633    1281 log.go:172] (0xc0009b0420) (0xc000954000) Create stream\nI0120 14:00:36.039653    1281 log.go:172] (0xc0009b0420) (0xc000954000) Stream added, broadcasting: 5\nI0120 14:00:36.041715    1281 log.go:172] (0xc0009b0420) Reply frame received for 5\nI0120 14:00:36.169827    1281 log.go:172] (0xc0009b0420) Data frame received for 5\nI0120 14:00:36.169962    1281 log.go:172] (0xc000954000) (5) Data frame handling\nI0120 14:00:36.169984    1281 log.go:172] (0xc000954000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 14:00:36.169995    1281 log.go:172] (0xc0009b0420) Data frame received for 3\nI0120 14:00:36.170044    1281 log.go:172] (0xc000626be0) (3) Data frame handling\nI0120 14:00:36.170073    1281 log.go:172] (0xc000626be0) (3) Data frame sent\nI0120 14:00:36.335897    1281 log.go:172] (0xc0009b0420) (0xc000954000) Stream removed, broadcasting: 5\nI0120 14:00:36.336128    1281 log.go:172] (0xc0009b0420) Data frame received for 1\nI0120 14:00:36.336175    1281 log.go:172] (0xc0009b0420) (0xc000626be0) Stream removed, broadcasting: 3\nI0120 14:00:36.336279    1281 log.go:172] (0xc000626b40) (1) Data frame handling\nI0120 14:00:36.336327    1281 log.go:172] (0xc000626b40) (1) Data frame sent\nI0120 14:00:36.336343    1281 log.go:172] (0xc0009b0420) (0xc000626b40) Stream removed, broadcasting: 1\nI0120 14:00:36.336362    1281 log.go:172] (0xc0009b0420) Go away received\nI0120 14:00:36.337604    1281 log.go:172] (0xc0009b0420) (0xc000626b40) Stream removed, broadcasting: 1\nI0120 14:00:36.337614    1281 log.go:172] (0xc0009b0420) (0xc000626be0) Stream removed, broadcasting: 3\nI0120 14:00:36.337619    1281 log.go:172] (0xc0009b0420) (0xc000954000) Stream removed, broadcasting: 5\n"
Jan 20 14:00:36.350: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 14:00:36.351: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 14:00:36.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:00:37.006: INFO: stderr: "I0120 14:00:36.526228    1303 log.go:172] (0xc0008506e0) (0xc000834c80) Create stream\nI0120 14:00:36.526823    1303 log.go:172] (0xc0008506e0) (0xc000834c80) Stream added, broadcasting: 1\nI0120 14:00:36.538567    1303 log.go:172] (0xc0008506e0) Reply frame received for 1\nI0120 14:00:36.538720    1303 log.go:172] (0xc0008506e0) (0xc0008e6000) Create stream\nI0120 14:00:36.538748    1303 log.go:172] (0xc0008506e0) (0xc0008e6000) Stream added, broadcasting: 3\nI0120 14:00:36.540137    1303 log.go:172] (0xc0008506e0) Reply frame received for 3\nI0120 14:00:36.540171    1303 log.go:172] (0xc0008506e0) (0xc00002fa40) Create stream\nI0120 14:00:36.540182    1303 log.go:172] (0xc0008506e0) (0xc00002fa40) Stream added, broadcasting: 5\nI0120 14:00:36.541561    1303 log.go:172] (0xc0008506e0) Reply frame received for 5\nI0120 14:00:36.746493    1303 log.go:172] (0xc0008506e0) Data frame received for 5\nI0120 14:00:36.746695    1303 log.go:172] (0xc00002fa40) (5) Data frame handling\nI0120 14:00:36.746733    1303 log.go:172] (0xc00002fa40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 14:00:36.815414    1303 log.go:172] (0xc0008506e0) Data frame received for 3\nI0120 14:00:36.815674    1303 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0120 14:00:36.815710    1303 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0120 14:00:36.815786    1303 log.go:172] (0xc0008506e0) Data frame received for 5\nI0120 14:00:36.815792    1303 log.go:172] (0xc00002fa40) (5) Data frame handling\nI0120 14:00:36.815799    1303 log.go:172] (0xc00002fa40) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0120 14:00:36.998148    1303 log.go:172] (0xc0008506e0) (0xc0008e6000) Stream removed, broadcasting: 3\nI0120 14:00:36.998442    1303 log.go:172] (0xc0008506e0) Data frame received for 1\nI0120 14:00:36.998452    1303 log.go:172] (0xc000834c80) (1) Data frame handling\nI0120 14:00:36.998473    1303 log.go:172] (0xc000834c80) (1) Data frame sent\nI0120 14:00:36.998481    1303 log.go:172] (0xc0008506e0) (0xc000834c80) Stream removed, broadcasting: 1\nI0120 14:00:36.999492    1303 log.go:172] (0xc0008506e0) (0xc00002fa40) Stream removed, broadcasting: 5\nI0120 14:00:36.999534    1303 log.go:172] (0xc0008506e0) (0xc000834c80) Stream removed, broadcasting: 1\nI0120 14:00:36.999539    1303 log.go:172] (0xc0008506e0) (0xc0008e6000) Stream removed, broadcasting: 3\nI0120 14:00:36.999544    1303 log.go:172] (0xc0008506e0) (0xc00002fa40) Stream removed, broadcasting: 5\n"
Jan 20 14:00:37.006: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 14:00:37.007: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 14:00:37.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:00:37.492: INFO: stderr: "I0120 14:00:37.239575    1316 log.go:172] (0xc00086e420) (0xc0004186e0) Create stream\nI0120 14:00:37.239924    1316 log.go:172] (0xc00086e420) (0xc0004186e0) Stream added, broadcasting: 1\nI0120 14:00:37.246478    1316 log.go:172] (0xc00086e420) Reply frame received for 1\nI0120 14:00:37.246524    1316 log.go:172] (0xc00086e420) (0xc000816820) Create stream\nI0120 14:00:37.246543    1316 log.go:172] (0xc00086e420) (0xc000816820) Stream added, broadcasting: 3\nI0120 14:00:37.248736    1316 log.go:172] (0xc00086e420) Reply frame received for 3\nI0120 14:00:37.248757    1316 log.go:172] (0xc00086e420) (0xc000418780) Create stream\nI0120 14:00:37.248764    1316 log.go:172] (0xc00086e420) (0xc000418780) Stream added, broadcasting: 5\nI0120 14:00:37.250028    1316 log.go:172] (0xc00086e420) Reply frame received for 5\nI0120 14:00:37.352671    1316 log.go:172] (0xc00086e420) Data frame received for 5\nI0120 14:00:37.352861    1316 log.go:172] (0xc000418780) (5) Data frame handling\nI0120 14:00:37.352954    1316 log.go:172] (0xc000418780) (5) Data frame sent\nI0120 14:00:37.353088    1316 log.go:172] (0xc00086e420) Data frame received for 3\nI0120 14:00:37.353141    1316 log.go:172] (0xc000816820) (3) Data frame handling\nI0120 14:00:37.353212    1316 log.go:172] (0xc000816820) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0120 14:00:37.473352    1316 log.go:172] (0xc00086e420) Data frame received for 1\nI0120 14:00:37.473496    1316 log.go:172] (0xc00086e420) (0xc000816820) Stream removed, broadcasting: 3\nI0120 14:00:37.473622    1316 log.go:172] (0xc00086e420) (0xc000418780) Stream removed, broadcasting: 5\nI0120 14:00:37.473654    1316 log.go:172] (0xc0004186e0) (1) Data frame handling\nI0120 14:00:37.473698    1316 log.go:172] (0xc0004186e0) (1) Data frame sent\nI0120 14:00:37.473718    1316 log.go:172] (0xc00086e420) (0xc0004186e0) Stream removed, broadcasting: 1\nI0120 14:00:37.473745    1316 log.go:172] (0xc00086e420) Go away received\nI0120 14:00:37.483883    1316 log.go:172] (0xc00086e420) (0xc0004186e0) Stream removed, broadcasting: 1\nI0120 14:00:37.483942    1316 log.go:172] (0xc00086e420) (0xc000816820) Stream removed, broadcasting: 3\nI0120 14:00:37.483993    1316 log.go:172] (0xc00086e420) (0xc000418780) Stream removed, broadcasting: 5\n"
Jan 20 14:00:37.493: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 14:00:37.493: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 14:00:37.501: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:00:37.501: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:00:37.501: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 20 14:00:37.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:00:37.996: INFO: stderr: "I0120 14:00:37.710004    1339 log.go:172] (0xc000a70420) (0xc0007e2640) Create stream\nI0120 14:00:37.710199    1339 log.go:172] (0xc000a70420) (0xc0007e2640) Stream added, broadcasting: 1\nI0120 14:00:37.718298    1339 log.go:172] (0xc000a70420) Reply frame received for 1\nI0120 14:00:37.718445    1339 log.go:172] (0xc000a70420) (0xc000898000) Create stream\nI0120 14:00:37.718462    1339 log.go:172] (0xc000a70420) (0xc000898000) Stream added, broadcasting: 3\nI0120 14:00:37.720923    1339 log.go:172] (0xc000a70420) Reply frame received for 3\nI0120 14:00:37.720952    1339 log.go:172] (0xc000a70420) (0xc0005701e0) Create stream\nI0120 14:00:37.720960    1339 log.go:172] (0xc000a70420) (0xc0005701e0) Stream added, broadcasting: 5\nI0120 14:00:37.722623    1339 log.go:172] (0xc000a70420) Reply frame received for 5\nI0120 14:00:37.843129    1339 log.go:172] (0xc000a70420) Data frame received for 5\nI0120 14:00:37.843274    1339 log.go:172] (0xc0005701e0) (5) Data frame handling\nI0120 14:00:37.843305    1339 log.go:172] (0xc0005701e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:00:37.843349    1339 log.go:172] (0xc000a70420) Data frame received for 3\nI0120 14:00:37.843384    1339 log.go:172] (0xc000898000) (3) Data frame handling\nI0120 14:00:37.843426    1339 log.go:172] (0xc000898000) (3) Data frame sent\nI0120 14:00:37.983563    1339 log.go:172] (0xc000a70420) (0xc000898000) Stream removed, broadcasting: 3\nI0120 14:00:37.984066    1339 log.go:172] (0xc000a70420) Data frame received for 1\nI0120 14:00:37.984130    1339 log.go:172] (0xc000a70420) (0xc0005701e0) Stream removed, broadcasting: 5\nI0120 14:00:37.984221    1339 log.go:172] (0xc0007e2640) (1) Data frame handling\nI0120 14:00:37.984281    1339 log.go:172] (0xc0007e2640) (1) Data frame sent\nI0120 14:00:37.984302    1339 log.go:172] (0xc000a70420) (0xc0007e2640) Stream removed, broadcasting: 1\nI0120 14:00:37.984335    1339 log.go:172] 
(0xc000a70420) Go away received\nI0120 14:00:37.985939    1339 log.go:172] (0xc000a70420) (0xc0007e2640) Stream removed, broadcasting: 1\nI0120 14:00:37.985955    1339 log.go:172] (0xc000a70420) (0xc000898000) Stream removed, broadcasting: 3\nI0120 14:00:37.985965    1339 log.go:172] (0xc000a70420) (0xc0005701e0) Stream removed, broadcasting: 5\n"
Jan 20 14:00:37.996: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:00:37.996: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 14:00:37.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:00:38.477: INFO: stderr: "I0120 14:00:38.196586    1360 log.go:172] (0xc000a90420) (0xc000324820) Create stream\nI0120 14:00:38.196793    1360 log.go:172] (0xc000a90420) (0xc000324820) Stream added, broadcasting: 1\nI0120 14:00:38.206385    1360 log.go:172] (0xc000a90420) Reply frame received for 1\nI0120 14:00:38.206455    1360 log.go:172] (0xc000a90420) (0xc000324000) Create stream\nI0120 14:00:38.206484    1360 log.go:172] (0xc000a90420) (0xc000324000) Stream added, broadcasting: 3\nI0120 14:00:38.208172    1360 log.go:172] (0xc000a90420) Reply frame received for 3\nI0120 14:00:38.208261    1360 log.go:172] (0xc000a90420) (0xc0006b8460) Create stream\nI0120 14:00:38.208277    1360 log.go:172] (0xc000a90420) (0xc0006b8460) Stream added, broadcasting: 5\nI0120 14:00:38.209869    1360 log.go:172] (0xc000a90420) Reply frame received for 5\nI0120 14:00:38.294912    1360 log.go:172] (0xc000a90420) Data frame received for 5\nI0120 14:00:38.294928    1360 log.go:172] (0xc0006b8460) (5) Data frame handling\nI0120 14:00:38.294944    1360 log.go:172] (0xc0006b8460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:00:38.352193    1360 log.go:172] (0xc000a90420) Data frame received for 3\nI0120 14:00:38.352254    1360 log.go:172] (0xc000324000) (3) Data frame handling\nI0120 14:00:38.352273    1360 log.go:172] (0xc000324000) (3) Data frame sent\nI0120 14:00:38.453434    1360 log.go:172] (0xc000a90420) Data frame received for 1\nI0120 14:00:38.453772    1360 log.go:172] (0xc000a90420) (0xc0006b8460) Stream removed, broadcasting: 5\nI0120 14:00:38.454176    1360 log.go:172] (0xc000324820) (1) Data frame handling\nI0120 14:00:38.454526    1360 log.go:172] (0xc000324820) (1) Data frame sent\nI0120 14:00:38.454728    1360 log.go:172] (0xc000a90420) (0xc000324000) Stream removed, broadcasting: 3\nI0120 14:00:38.454821    1360 log.go:172] (0xc000a90420) (0xc000324820) Stream removed, broadcasting: 1\nI0120 14:00:38.454904    1360 log.go:172] 
(0xc000a90420) Go away received\nI0120 14:00:38.457850    1360 log.go:172] (0xc000a90420) (0xc000324820) Stream removed, broadcasting: 1\nI0120 14:00:38.457905    1360 log.go:172] (0xc000a90420) (0xc000324000) Stream removed, broadcasting: 3\nI0120 14:00:38.457950    1360 log.go:172] (0xc000a90420) (0xc0006b8460) Stream removed, broadcasting: 5\n"
Jan 20 14:00:38.478: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:00:38.478: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 14:00:38.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:00:39.309: INFO: stderr: "I0120 14:00:38.751438    1381 log.go:172] (0xc000138e70) (0xc00087a640) Create stream\nI0120 14:00:38.752063    1381 log.go:172] (0xc000138e70) (0xc00087a640) Stream added, broadcasting: 1\nI0120 14:00:38.763216    1381 log.go:172] (0xc000138e70) Reply frame received for 1\nI0120 14:00:38.763339    1381 log.go:172] (0xc000138e70) (0xc0009bc000) Create stream\nI0120 14:00:38.763399    1381 log.go:172] (0xc000138e70) (0xc0009bc000) Stream added, broadcasting: 3\nI0120 14:00:38.765222    1381 log.go:172] (0xc000138e70) Reply frame received for 3\nI0120 14:00:38.765293    1381 log.go:172] (0xc000138e70) (0xc00087a6e0) Create stream\nI0120 14:00:38.765366    1381 log.go:172] (0xc000138e70) (0xc00087a6e0) Stream added, broadcasting: 5\nI0120 14:00:38.772518    1381 log.go:172] (0xc000138e70) Reply frame received for 5\nI0120 14:00:39.004637    1381 log.go:172] (0xc000138e70) Data frame received for 5\nI0120 14:00:39.004792    1381 log.go:172] (0xc00087a6e0) (5) Data frame handling\nI0120 14:00:39.004838    1381 log.go:172] (0xc00087a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:00:39.049314    1381 log.go:172] (0xc000138e70) Data frame received for 3\nI0120 14:00:39.049671    1381 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0120 14:00:39.049716    1381 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0120 14:00:39.283122    1381 log.go:172] (0xc000138e70) (0xc0009bc000) Stream removed, broadcasting: 3\nI0120 14:00:39.283705    1381 log.go:172] (0xc000138e70) Data frame received for 1\nI0120 14:00:39.283726    1381 log.go:172] (0xc00087a640) (1) Data frame handling\nI0120 14:00:39.283765    1381 log.go:172] (0xc00087a640) (1) Data frame sent\nI0120 14:00:39.284095    1381 log.go:172] (0xc000138e70) (0xc00087a640) Stream removed, broadcasting: 1\nI0120 14:00:39.284569    1381 log.go:172] (0xc000138e70) (0xc00087a6e0) Stream removed, broadcasting: 5\nI0120 14:00:39.284669    1381 log.go:172] 
(0xc000138e70) Go away received\nI0120 14:00:39.286801    1381 log.go:172] (0xc000138e70) (0xc00087a640) Stream removed, broadcasting: 1\nI0120 14:00:39.286856    1381 log.go:172] (0xc000138e70) (0xc0009bc000) Stream removed, broadcasting: 3\nI0120 14:00:39.286881    1381 log.go:172] (0xc000138e70) (0xc00087a6e0) Stream removed, broadcasting: 5\n"
Jan 20 14:00:39.310: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:00:39.310: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

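The three exec commands above break each pod's readiness probe by moving index.html out of the nginx web root; the trailing `|| true` keeps the remote shell's exit status 0 whether or not the file is actually there (compare the earlier "can't rename '/tmp/index.html': No such file or directory" stderr on ss-2). A minimal local sketch of that idiom, with hypothetical temp-dir paths standing in for the pod filesystem and no cluster required:

```shell
#!/bin/sh
# Sketch of the `mv -v SRC DST || true` idiom from the exec commands above.
# Hypothetical paths stand in for /usr/share/nginx/html and /tmp.
set -eu

root=$(mktemp -d)
mkdir -p "$root/html" "$root/tmp"
echo healthy > "$root/html/index.html"

# First move succeeds: the file the readiness probe serves leaves the web root.
mv -v "$root/html/index.html" "$root/tmp/" || true

# Second move fails (source is gone), but `|| true` absorbs the error,
# so the compound command still exits 0 -- which is why the e2e test can
# run the same command against every pod regardless of prior state.
mv -v "$root/html/index.html" "$root/tmp/" 2>&1 || true
echo "exit status: $?"

rm -rf "$root"
```

Because of `|| true`, the test framework distinguishes success from failure by parsing stdout/stderr, not the exit code.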
Jan 20 14:00:39.310: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 14:00:39.323: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 20 14:00:49.340: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 14:00:49.340: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 14:00:49.340: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 14:00:49.394: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:49.394: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:49.394: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:49.394: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:49.394: INFO: 
Jan 20 14:00:49.394: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:51.835: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:51.835: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:51.835: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:51.835: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:51.836: INFO: 
Jan 20 14:00:51.836: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:52.851: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:52.851: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:52.851: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:52.851: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:52.851: INFO: 
Jan 20 14:00:52.851: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:53.866: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:53.866: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:53.867: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:53.867: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:53.867: INFO: 
Jan 20 14:00:53.867: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:54.884: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:54.884: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:54.884: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:54.884: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:54.884: INFO: 
Jan 20 14:00:54.884: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:55.901: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:55.901: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:55.901: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:55.901: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:55.901: INFO: 
Jan 20 14:00:55.901: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:56.909: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 20 14:00:56.909: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:56.909: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:56.909: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:56.909: INFO: 
Jan 20 14:00:56.909: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 14:00:57.979: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 20 14:00:57.979: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:57.979: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:57.979: INFO: 
Jan 20 14:00:57.979: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 20 14:00:59.001: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 20 14:00:59.001: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:02 +0000 UTC  }]
Jan 20 14:00:59.001: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 14:00:24 +0000 UTC  }]
Jan 20 14:00:59.002: INFO: 
Jan 20 14:00:59.002: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6448
Jan 20 14:01:00.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:00.209: INFO: rc: 1
Jan 20 14:01:00.209: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00212c150 exit status 1   true [0xc00086aea0 0xc00086af10 0xc00086af80] [0xc00086aea0 0xc00086af10 0xc00086af80] [0xc00086aef0 0xc00086af58] [0xba6c50 0xba6c50] 0xc00170ba40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
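The framework reruns the failed RunHostCmd on a fixed 10-second interval; here every retry returns rc: 1 because ss-0's container (and soon the pod itself) is already gone. A hypothetical shell sketch of that fixed-interval retry pattern, with a stub in place of the real `kubectl exec` (the actual loop lives in the Go e2e framework and polls against a timeout, not an attempt count):

```shell
#!/bin/sh
# Fixed-interval retry, as in "Waiting 10s to retry failed RunHostCmd" above.
# run_cmd is a hypothetical stub for `kubectl exec ...`.
set -u

interval=1      # the log above uses 10s
max_attempts=5
attempt=0

run_cmd() {
    # Fail until the third attempt, purely to exercise the retry path.
    [ "$attempt" -ge 3 ]
}

while ! run_cmd; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
        echo "rc: 1 after $attempt attempts, giving up"
        exit 1
    fi
    echo "Waiting ${interval}s to retry failed command (attempt $attempt)"
    sleep "$interval"
done
echo "command succeeded after $attempt retries"
```

In the scale-to-0 phase shown here, the loop is expected to keep failing: "pods \"ss-0\" not found" is the signal the framework ultimately wants.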
Jan 20 14:01:10.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:10.410: INFO: rc: 1
Jan 20 14:01:10.411: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c7f890 exit status 1   true [0xc0034f2638 0xc0034f2650 0xc0034f2668] [0xc0034f2638 0xc0034f2650 0xc0034f2668] [0xc0034f2648 0xc0034f2660] [0xba6c50 0xba6c50] 0xc002b78420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:01:20.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:20.717: INFO: rc: 1
Jan 20 14:01:20.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00212c210 exit status 1   true [0xc00086afb0 0xc00086afc8 0xc00086b008] [0xc00086afb0 0xc00086afc8 0xc00086b008] [0xc00086afc0 0xc00086afe8] [0xba6c50 0xba6c50] 0xc002326900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:01:30.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:30.844: INFO: rc: 1
Jan 20 14:01:30.844: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0031acff0 exit status 1   true [0xc00304c710 0xc00304c728 0xc00304c740] [0xc00304c710 0xc00304c728 0xc00304c740] [0xc00304c720 0xc00304c738] [0xba6c50 0xba6c50] 0xc002632120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:01:40.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:41.022: INFO: rc: 1
Jan 20 14:01:41.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0031ad0e0 exit status 1   true [0xc00304c748 0xc00304c760 0xc00304c778] [0xc00304c748 0xc00304c760 0xc00304c778] [0xc00304c758 0xc00304c770] [0xba6c50 0xba6c50] 0xc002633500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:01:51.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:01:51.283: INFO: rc: 1
Jan 20 14:01:51.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00067dad0 exit status 1   true [0xc0005cccc8 0xc0005cd0f0 0xc0005cd3a8] [0xc0005cccc8 0xc0005cd0f0 0xc0005cd3a8] [0xc0005cd060 0xc0005cd330] [0xba6c50 0xba6c50] 0xc001399bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:02:01.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:02:01.472: INFO: rc: 1
Jan 20 14:02:01.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cb8090 exit status 1   true [0xc00053e340 0xc002510010 0xc002510028] [0xc00053e340 0xc002510010 0xc002510028] [0xc002510008 0xc002510020] [0xba6c50 0xba6c50] 0xc0020c2c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:02:11.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:02:11.616: INFO: rc: 1
Jan 20 14:02:11.616: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00067dbc0 exit status 1   true [0xc0005cd3e0 0xc0005cd760 0xc0005cd908] [0xc0005cd3e0 0xc0005cd760 0xc0005cd908] [0xc0005cd650 0xc0005cd828] [0xba6c50 0xba6c50] 0xc001fef860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:05:55.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:05:55.291: INFO: rc: 1
Jan 20 14:05:55.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cb8090 exit status 1   true [0xc0005ccb88 0xc0005cd060 0xc0005cd330] [0xc0005ccb88 0xc0005cd060 0xc0005cd330] [0xc0005ccee0 0xc0005cd178] [0xba6c50 0xba6c50] 0xc002678b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 20 14:06:05.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:06:05.494: INFO: rc: 1
Jan 20 14:06:05.494: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 20 14:06:05.494: INFO: Scaling statefulset ss to 0
Jan 20 14:06:05.505: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 20 14:06:05.507: INFO: Deleting all statefulset in ns statefulset-6448
Jan 20 14:06:05.509: INFO: Scaling statefulset ss to 0
Jan 20 14:06:05.517: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 14:06:05.520: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:06:05.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6448" for this suite.
Jan 20 14:06:11.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:06:11.730: INFO: namespace statefulset-6448 deletion completed in 6.159693472s

• [SLOW TEST:369.768 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
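The retry cycle visible in the log above (run the command, report `rc: 1`, wait 10s, try again until the budget is spent) can be sketched as a small shell helper. This is an illustrative reconstruction, not the framework's actual `RunHostCmd` code; the function name and parameters are assumptions.

```shell
# Hedged sketch of the fixed-interval retry loop seen in the log. Assumption:
# retry_cmd and its (tries, delay) parameters are illustrative names.
retry_cmd() {
  local tries=$1 delay=$2; shift 2
  local i rc
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0                      # success: stop retrying
    rc=$?
    echo "rc: $rc -- waiting ${delay}s to retry ($i/$tries)" >&2
    sleep "$delay"
  done
  return 1                                # budget exhausted, as at 14:06:05
}

# e.g. retry_cmd 30 10 kubectl exec -n statefulset-6448 ss-0 -- \
#        /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Note that the inner command ends in `|| true`, so the observed `rc: 1` comes from `kubectl exec` itself failing ("pods \"ss-0\" not found"), not from `mv` — the retries only stop once the pod exists again or the loop gives up.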
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:06:11.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 20 14:06:11.908: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:06:28.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1404" for this suite.
Jan 20 14:06:34.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:06:34.729: INFO: namespace init-container-1404 deletion completed in 6.266176757s

• [SLOW TEST:22.999 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
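The behavior this test exercises — init containers running to completion, in order, before the main container of a `restartPolicy: Never` pod — can be expressed as a minimal manifest. This is a sketch; the names and images are illustrative assumptions, not the exact spec the test creates.

```yaml
# Sketch of a RestartNever pod with init containers; names and images are
# illustrative, not taken from the e2e test's PodSpec.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo first init done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo second init done']
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'echo main ran']
```

Init containers run sequentially; with `restartPolicy: Never`, a failing init container fails the pod permanently instead of being retried.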
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:06:34.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 20 14:06:34.848: INFO: Waiting up to 5m0s for pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf" in namespace "var-expansion-2386" to be "success or failure"
Jan 20 14:06:34.869: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.133725ms
Jan 20 14:06:36.884: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035759366s
Jan 20 14:06:38.898: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050258988s
Jan 20 14:06:40.908: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059917595s
Jan 20 14:06:42.915: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06668363s
STEP: Saw pod success
Jan 20 14:06:42.915: INFO: Pod "var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf" satisfied condition "success or failure"
Jan 20 14:06:42.917: INFO: Trying to get logs from node iruya-node pod var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf container dapi-container: 
STEP: delete the pod
Jan 20 14:06:42.985: INFO: Waiting for pod var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf to disappear
Jan 20 14:06:42.992: INFO: Pod var-expansion-472ccd13-f525-415c-a8a2-d99dc7fa4acf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:06:42.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2386" for this suite.
Jan 20 14:06:49.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:06:49.259: INFO: namespace var-expansion-2386 deletion completed in 6.261938144s

• [SLOW TEST:14.529 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
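The substitution being verified — `$(VAR)` references in a container's `command` expanded from the container's environment — looks roughly like the following manifest. A sketch only; `MY_VAR` and the image are illustrative, not the test's actual values.

```yaml
# Sketch of command variable expansion; $(MY_VAR) is expanded by Kubernetes
# from the container's env before the command runs (names illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MY_VAR
      value: "hello"
    command: ['sh', '-c', 'echo $(MY_VAR)']
```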
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:06:49.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 20 14:06:58.471: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:06:59.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9887" for this suite.
Jan 20 14:09:41.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:09:41.743: INFO: namespace replicaset-9887 deletion completed in 2m42.158033274s

• [SLOW TEST:172.483 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
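The adopt/release flow above hinges on label/selector matching: a bare pod whose labels match a ReplicaSet's selector is adopted, and relabeling the pod releases it. A minimal sketch of that setup, with illustrative image and label values:

```yaml
# Sketch: a bare pod whose 'name' label matches the ReplicaSet selector is
# adopted on RS creation; changing the label releases it (values illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: nginx
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx
```

Overwriting the pod's `name` label afterwards (e.g. with `kubectl label --overwrite`) detaches it from the ReplicaSet, which then creates a replacement to restore `replicas: 1`.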
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:09:41.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 20 14:09:41.882: INFO: Waiting up to 5m0s for pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5" in namespace "downward-api-2748" to be "success or failure"
Jan 20 14:09:41.888: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.293181ms
Jan 20 14:09:43.901: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018156102s
Jan 20 14:09:45.912: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029086088s
Jan 20 14:09:47.921: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03850807s
Jan 20 14:09:49.940: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057074257s
Jan 20 14:09:51.950: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067317072s
STEP: Saw pod success
Jan 20 14:09:51.950: INFO: Pod "downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5" satisfied condition "success or failure"
Jan 20 14:09:51.955: INFO: Trying to get logs from node iruya-node pod downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5 container dapi-container: 
STEP: delete the pod
Jan 20 14:09:52.032: INFO: Waiting for pod downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5 to disappear
Jan 20 14:09:52.044: INFO: Pod downward-api-5c3562b4-75ec-48a6-967e-def7a3f5c1a5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:09:52.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2748" for this suite.
Jan 20 14:09:58.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:09:58.235: INFO: namespace downward-api-2748 deletion completed in 6.186006638s

• [SLOW TEST:16.492 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
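What "default limits from node allocatable" means in practice: a `resourceFieldRef` for `limits.cpu` / `limits.memory` on a container with no limits set falls back to the node's allocatable values. A hedged sketch (env var names and image are illustrative):

```yaml
# Sketch: no resource limits are set, so the resourceFieldRef values default
# to node allocatable CPU/memory (names and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ['sh', '-c', 'env | grep -E "CPU_LIMIT|MEMORY_LIMIT"']
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```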
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:09:58.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9162
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9162
STEP: Deleting pre-stop pod
Jan 20 14:10:19.483: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:10:19.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9162" for this suite.
Jan 20 14:10:57.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:10:57.743: INFO: namespace prestop-9162 deletion completed in 38.227748446s

• [SLOW TEST:59.508 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
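The PreStop test above creates a server pod and a tester pod, deletes the server, and then verifies (via the `"Received": {"prestop": 1}` payload) that the server's `preStop` hook fired before termination. As a rough illustration of the mechanism under test, a pod carrying such a lifecycle hook could look like the following sketch; all names, the image, and the endpoint are illustrative, not taken from the test's actual specs:

```yaml
# Hypothetical sketch of a pod with a preStop lifecycle hook.
# The e2e test's server/tester pods use their own images and endpoints;
# this only shows the API surface being exercised.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo          # illustrative name
spec:
  containers:
  - name: server
    image: nginx              # any long-running image works
    lifecycle:
      preStop:
        httpGet:              # invoked by the kubelet before SIGTERM is sent
          path: /prestop
          port: 8080
```

On deletion, the kubelet runs the `preStop` handler and only then begins the normal termination sequence, which is why the tester pod can observe the `prestop` hit before the server disappears.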
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:10:57.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 20 14:10:57.834: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:10:57.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3448" for this suite.
Jan 20 14:11:03.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:11:04.179: INFO: namespace kubectl-3448 deletion completed in 6.224971132s

• [SLOW TEST:6.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:11:04.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-f537dda7-bfac-4808-892f-1856f00eb417
STEP: Creating a pod to test consume secrets
Jan 20 14:11:04.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a" in namespace "projected-8554" to be "success or failure"
Jan 20 14:11:04.344: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.491767ms
Jan 20 14:11:06.353: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036140887s
Jan 20 14:11:08.362: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045306902s
Jan 20 14:11:10.371: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053731226s
Jan 20 14:11:12.378: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060658402s
STEP: Saw pod success
Jan 20 14:11:12.378: INFO: Pod "pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a" satisfied condition "success or failure"
Jan 20 14:11:12.382: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a container secret-volume-test: 
STEP: delete the pod
Jan 20 14:11:12.545: INFO: Waiting for pod pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a to disappear
Jan 20 14:11:12.554: INFO: Pod pod-projected-secrets-ba8e1f04-7583-41fc-b058-babd6935db7a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:11:12.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8554" for this suite.
Jan 20 14:11:18.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:11:18.746: INFO: namespace projected-8554 deletion completed in 6.18446047s

• [SLOW TEST:14.566 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
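The "consumable in multiple volumes" test above mounts one secret through two separate projected volumes in the same pod and reads it back from both paths. A minimal sketch of that shape (secret name, pod name, and mount paths are illustrative assumptions, not the test's generated names):

```yaml
# Hypothetical sketch: one secret projected into two volumes of the same pod.
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/secret-volume-1
    - name: secret-vol-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-vol-1
    projected:
      sources:
      - secret:
          name: my-projected-secret # illustrative secret name
  - name: secret-vol-2
    projected:
      sources:
      - secret:
          name: my-projected-secret # same secret, second mount
```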
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:11:18.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d345a56f-2355-412f-9aca-7ed7a1fc2a17
STEP: Creating a pod to test consume secrets
Jan 20 14:11:18.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699" in namespace "projected-7719" to be "success or failure"
Jan 20 14:11:18.886: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699": Phase="Pending", Reason="", readiness=false. Elapsed: 32.45823ms
Jan 20 14:11:20.992: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138732128s
Jan 20 14:11:23.000: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146723876s
Jan 20 14:11:25.012: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159146434s
Jan 20 14:11:27.021: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.168047407s
STEP: Saw pod success
Jan 20 14:11:27.021: INFO: Pod "pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699" satisfied condition "success or failure"
Jan 20 14:11:27.025: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 14:11:27.158: INFO: Waiting for pod pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699 to disappear
Jan 20 14:11:27.165: INFO: Pod pod-projected-secrets-62fd169b-6c15-4737-a1ac-ad0e30781699 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:11:27.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7719" for this suite.
Jan 20 14:11:33.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:11:33.383: INFO: namespace projected-7719 deletion completed in 6.213658433s

• [SLOW TEST:14.636 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:11:33.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1424/secret-test-27e0a9c8-6989-48c9-8bc6-0e654da3c24e
STEP: Creating a pod to test consume secrets
Jan 20 14:11:33.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a" in namespace "secrets-1424" to be "success or failure"
Jan 20 14:11:33.648: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 97.63161ms
Jan 20 14:11:35.657: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10671139s
Jan 20 14:11:37.664: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114261837s
Jan 20 14:11:39.676: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125747269s
Jan 20 14:11:41.684: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13438633s
STEP: Saw pod success
Jan 20 14:11:41.685: INFO: Pod "pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a" satisfied condition "success or failure"
Jan 20 14:11:41.689: INFO: Trying to get logs from node iruya-node pod pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a container env-test: 
STEP: delete the pod
Jan 20 14:11:41.783: INFO: Waiting for pod pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a to disappear
Jan 20 14:11:41.806: INFO: Pod pod-configmaps-456761ea-77a8-4f57-897c-23b6b3a06f0a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:11:41.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1424" for this suite.
Jan 20 14:11:47.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:11:48.010: INFO: namespace secrets-1424 deletion completed in 6.167342371s

• [SLOW TEST:14.626 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
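The Secrets test above (note the `env-test` container in its log lines) consumes a secret key through an environment variable rather than a volume. The mechanism, sketched with illustrative names:

```yaml
# Hypothetical sketch: exposing a secret key as an environment variable,
# the pattern the "consumable via the environment" test checks.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret   # illustrative secret name
          key: data-1       # illustrative key within the secret
```

The test then asserts the pod reaches `Succeeded` and that the container's stdout contains the expected secret value.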
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:11:48.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 14:12:00.139: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.152: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.161: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.168: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.175: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.185: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.197: INFO: Unable to read jessie_udp@PodARecord from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.202: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456: the server could not find the requested resource (get pods dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456)
Jan 20 14:12:00.202: INFO: Lookups using dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 20 14:12:05.278: INFO: DNS probes using dns-4324/dns-test-b51ff5f0-00cf-4c49-bc6a-89ce70f3c456 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:12:05.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4324" for this suite.
Jan 20 14:12:11.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:12:11.592: INFO: namespace dns-4324 deletion completed in 6.187706741s

• [SLOW TEST:23.582 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:12:11.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b43783a5-f58e-4633-be21-18032199f1bf
STEP: Creating a pod to test consume secrets
Jan 20 14:12:11.719: INFO: Waiting up to 5m0s for pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7" in namespace "secrets-5429" to be "success or failure"
Jan 20 14:12:11.727: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379677ms
Jan 20 14:12:13.738: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019023697s
Jan 20 14:12:15.747: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028082789s
Jan 20 14:12:17.755: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036237391s
Jan 20 14:12:19.763: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044156043s
Jan 20 14:12:21.773: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053698143s
STEP: Saw pod success
Jan 20 14:12:21.773: INFO: Pod "pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7" satisfied condition "success or failure"
Jan 20 14:12:21.786: INFO: Trying to get logs from node iruya-node pod pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7 container secret-volume-test: 
STEP: delete the pod
Jan 20 14:12:22.049: INFO: Waiting for pod pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7 to disappear
Jan 20 14:12:22.057: INFO: Pod pod-secrets-2e97ece5-7953-4857-8ee1-dfc8981164e7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:12:22.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5429" for this suite.
Jan 20 14:12:28.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:12:28.253: INFO: namespace secrets-5429 deletion completed in 6.1907118s

• [SLOW TEST:16.660 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
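The secret-volume test above runs as non-root with an explicit `defaultMode` and `fsGroup`, checking the resulting file permissions inside the mount. A sketch of that pod shape, with all concrete values (UID, GID, mode, names) being illustrative assumptions rather than the test's actual parameters:

```yaml
# Hypothetical sketch: secret volume consumed as non-root with
# defaultMode and fsGroup set, as the test name describes.
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000               # non-root user (illustrative UID)
    fsGroup: 1000                 # files in the volume get this group
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret       # illustrative
      defaultMode: 0440           # illustrative mode applied to each key file
```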
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:12:28.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 20 14:12:28.357: INFO: Waiting up to 5m0s for pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded" in namespace "emptydir-918" to be "success or failure"
Jan 20 14:12:28.366: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10365ms
Jan 20 14:12:30.374: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01607539s
Jan 20 14:12:32.758: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400048082s
Jan 20 14:12:34.768: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410794585s
Jan 20 14:12:36.792: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.434192837s
STEP: Saw pod success
Jan 20 14:12:36.792: INFO: Pod "pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded" satisfied condition "success or failure"
Jan 20 14:12:36.798: INFO: Trying to get logs from node iruya-node pod pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded container test-container: 
STEP: delete the pod
Jan 20 14:12:36.923: INFO: Waiting for pod pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded to disappear
Jan 20 14:12:36.929: INFO: Pod pod-3ab2ab19-93f8-4cb2-8f4c-6c357ce7bded no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:12:36.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-918" for this suite.
Jan 20 14:12:42.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:12:43.112: INFO: namespace emptydir-918 deletion completed in 6.175499968s

• [SLOW TEST:14.858 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
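The EmptyDir test above targets a tmpfs-backed volume, which is selected via `medium: Memory` on the `emptyDir` source. A sketch of that configuration (pod name, paths, and UID are illustrative):

```yaml
# Hypothetical sketch: a tmpfs-backed emptyDir written to by a non-root
# container, the storage shape the (non-root,0666,tmpfs) test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # non-root, per the test variant's name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/test/f && cat /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # tmpfs instead of node-local disk
```

Because the volume lives in memory, it counts against the container's memory limits and is cleared when the pod is removed from the node.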
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:12:43.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7304.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7304.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7304.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7304.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 14:12:55.365: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.371: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.378: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7304.svc.cluster.local from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.382: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.387: INFO: Unable to read jessie_udp@PodARecord from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.392: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455: the server could not find the requested resource (get pods dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455)
Jan 20 14:12:55.392: INFO: Lookups using dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7304.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 20 14:13:00.443: INFO: DNS probes using dns-7304/dns-test-8786eca3-ba2f-48be-9f7f-62fba4f02455 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:13:00.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7304" for this suite.
Jan 20 14:13:06.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:13:06.915: INFO: namespace dns-7304 deletion completed in 6.179092007s

• [SLOW TEST:23.803 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:13:06.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1734a0ac-ba6e-4edc-88d7-4df14b2c0862
STEP: Creating a pod to test consume secrets
Jan 20 14:13:07.054: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13" in namespace "projected-8045" to be "success or failure"
Jan 20 14:13:07.076: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 21.921423ms
Jan 20 14:13:09.349: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294867583s
Jan 20 14:13:11.356: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30200612s
Jan 20 14:13:13.364: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31015698s
Jan 20 14:13:15.373: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.318497793s
STEP: Saw pod success
Jan 20 14:13:15.373: INFO: Pod "pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13" satisfied condition "success or failure"
Jan 20 14:13:15.378: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 14:13:15.545: INFO: Waiting for pod pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13 to disappear
Jan 20 14:13:15.557: INFO: Pod pod-projected-secrets-3053f17d-8002-477a-80b1-e5a46727ec13 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:13:15.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8045" for this suite.
Jan 20 14:13:21.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:13:21.729: INFO: namespace projected-8045 deletion completed in 6.163430892s

• [SLOW TEST:14.814 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
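The test above consumes a secret through a projected volume. A minimal manifest of the kind such a test constructs might look like the following (the pod/secret names and the busybox image are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Print the projected secret key so the test can check the file contents.
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # hypothetical secret name
```

The pod runs to completion and the framework then reads its logs, which is why the log shows a "success or failure" wait rather than a readiness wait.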
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:13:21.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 20 14:13:21.848: INFO: Waiting up to 5m0s for pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9" in namespace "downward-api-2972" to be "success or failure"
Jan 20 14:13:21.879: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.880356ms
Jan 20 14:13:23.893: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044696009s
Jan 20 14:13:25.901: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05301114s
Jan 20 14:13:27.914: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065676127s
Jan 20 14:13:29.923: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07459903s
STEP: Saw pod success
Jan 20 14:13:29.923: INFO: Pod "downward-api-a1233505-69b0-40d2-b235-9904cf7471e9" satisfied condition "success or failure"
Jan 20 14:13:29.928: INFO: Trying to get logs from node iruya-node pod downward-api-a1233505-69b0-40d2-b235-9904cf7471e9 container dapi-container: 
STEP: delete the pod
Jan 20 14:13:30.299: INFO: Waiting for pod downward-api-a1233505-69b0-40d2-b235-9904cf7471e9 to disappear
Jan 20 14:13:30.359: INFO: Pod downward-api-a1233505-69b0-40d2-b235-9904cf7471e9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:13:30.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2972" for this suite.
Jan 20 14:13:36.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:13:36.591: INFO: namespace downward-api-2972 deletion completed in 6.223806165s

• [SLOW TEST:14.861 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
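The Downward API env-var test above exposes pod metadata through `fieldRef`. A sketch of the pattern it exercises (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # dump env so the test can grep for the values
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```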
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:13:36.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 14:13:36.728: INFO: Waiting up to 5m0s for pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67" in namespace "emptydir-6013" to be "success or failure"
Jan 20 14:13:36.738: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67": Phase="Pending", Reason="", readiness=false. Elapsed: 9.915372ms
Jan 20 14:13:38.758: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02925505s
Jan 20 14:13:40.767: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038553956s
Jan 20 14:13:42.776: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047712515s
Jan 20 14:13:44.783: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054674893s
STEP: Saw pod success
Jan 20 14:13:44.783: INFO: Pod "pod-44678e7c-9293-4c7c-96c3-9cf466011d67" satisfied condition "success or failure"
Jan 20 14:13:44.787: INFO: Trying to get logs from node iruya-node pod pod-44678e7c-9293-4c7c-96c3-9cf466011d67 container test-container: 
STEP: delete the pod
Jan 20 14:13:44.820: INFO: Waiting for pod pod-44678e7c-9293-4c7c-96c3-9cf466011d67 to disappear
Jan 20 14:13:44.827: INFO: Pod pod-44678e7c-9293-4c7c-96c3-9cf466011d67 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:13:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6013" for this suite.
Jan 20 14:13:50.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:13:50.979: INFO: namespace emptydir-6013 deletion completed in 6.147257002s

• [SLOW TEST:14.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
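The EmptyDir tmpfs test above mounts a memory-backed volume and checks file modes. A minimal sketch of that setup (names, image, and the exact mode-checking command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # List the mount so permissions (e.g. 0644 on a written file) can be verified.
    command: ["sh", "-c", "ls -l /test-volume && grep test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs backing, as in the (root,0644,tmpfs) variant
```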
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:13:50.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b1ffe303-a1dd-4174-8346-fe5b488a9224
STEP: Creating secret with name s-test-opt-upd-bec4ebec-d230-4b97-8d33-1807cd63cf41
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b1ffe303-a1dd-4174-8346-fe5b488a9224
STEP: Updating secret s-test-opt-upd-bec4ebec-d230-4b97-8d33-1807cd63cf41
STEP: Creating secret with name s-test-opt-create-0f303e43-6d33-4893-b5bf-5957866c2143
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:15:29.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7031" for this suite.
Jan 20 14:15:53.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:15:53.409: INFO: namespace projected-7031 deletion completed in 24.224537855s

• [SLOW TEST:122.430 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
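The optional-updates test above deletes, updates, and creates secrets that a running pod mounts with `optional: true`, then waits for the kubelet to reflect each change in the volume. The relevant volume stanza looks roughly like this (secret names are illustrative):

```yaml
  volumes:
  - name: opt-del-volume
    secret:
      secretName: s-test-opt-del   # hypothetical; deleted mid-test
      optional: true               # pod stays Running even when the secret is absent
  - name: opt-upd-volume
    secret:
      secretName: s-test-opt-upd   # hypothetical; updated mid-test
      optional: true
```

Because updates propagate asynchronously via the kubelet sync loop, the "waiting to observe update in volume" step can take a while, which accounts for this spec's 122-second runtime.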
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:15:53.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 20 14:15:53.553: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 20 14:15:53.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:55.953: INFO: stderr: ""
Jan 20 14:15:55.953: INFO: stdout: "service/redis-slave created\n"
Jan 20 14:15:55.954: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 20 14:15:55.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:56.530: INFO: stderr: ""
Jan 20 14:15:56.530: INFO: stdout: "service/redis-master created\n"
Jan 20 14:15:56.531: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 20 14:15:56.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:56.986: INFO: stderr: ""
Jan 20 14:15:56.987: INFO: stdout: "service/frontend created\n"
Jan 20 14:15:56.987: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 20 14:15:56.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:57.438: INFO: stderr: ""
Jan 20 14:15:57.439: INFO: stdout: "deployment.apps/frontend created\n"
Jan 20 14:15:57.440: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 20 14:15:57.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:57.863: INFO: stderr: ""
Jan 20 14:15:57.863: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 20 14:15:57.864: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 20 14:15:57.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7243'
Jan 20 14:15:59.516: INFO: stderr: ""
Jan 20 14:15:59.517: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 20 14:15:59.517: INFO: Waiting for all frontend pods to be Running.
Jan 20 14:16:24.569: INFO: Waiting for frontend to serve content.
Jan 20 14:16:24.698: INFO: Trying to add a new entry to the guestbook.
Jan 20 14:16:24.727: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 20 14:16:24.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:25.036: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:25.036: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 14:16:25.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:25.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:25.279: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 14:16:25.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:25.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:25.456: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 14:16:25.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:25.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:25.577: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 14:16:25.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:25.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:25.725: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 14:16:25.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7243'
Jan 20 14:16:26.010: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:16:26.010: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:16:26.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7243" for this suite.
Jan 20 14:17:08.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:17:08.297: INFO: namespace kubectl-7243 deletion completed in 42.262279174s

• [SLOW TEST:74.887 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:17:08.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 20 14:17:08.402: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:17:25.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5007" for this suite.
Jan 20 14:17:47.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:17:47.994: INFO: namespace init-container-5007 deletion completed in 22.118261827s

• [SLOW TEST:39.697 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
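The init-container test above creates a `restartPolicy: Always` pod whose init containers must all complete before the app container starts. A minimal sketch of that shape (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-container-demo   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]   # must exit 0 before init-2 starts
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # runs only after both inits succeed
```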
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:17:47.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 20 14:17:48.064: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 20 14:17:48.717: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 20 14:17:50.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 14:17:52.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 14:17:54.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 14:17:56.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 14:17:58.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715126668, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 14:18:04.989: INFO: Waited 4.016513462s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:18:05.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3320" for this suite.
Jan 20 14:18:11.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:18:11.738: INFO: namespace aggregator-3320 deletion completed in 6.25065266s

• [SLOW TEST:23.744 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
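The Aggregator test above registers a sample extension API server behind the kube-aggregator. Registration is done with an `APIService` object of roughly this form (group, version, service, and namespace names are illustrative, not taken from this run):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # hypothetical group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  # Backend Service fronting the sample-apiserver Deployment seen in the log.
  service:
    name: sample-api
    namespace: aggregator-demo
  insecureSkipTLSVerify: true   # test-only shortcut; production would set caBundle
  groupPriorityMinimum: 2000
  versionPriority: 200
```

The repeated `MinimumReplicasUnavailable` status lines in the log are the framework polling the sample-apiserver Deployment until it becomes Available before issuing requests through the aggregated API.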
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:18:11.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5e1f556b-79e5-4e03-940d-b7a78b28fe7d
STEP: Creating a pod to test consume secrets
Jan 20 14:18:11.954: INFO: Waiting up to 5m0s for pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4" in namespace "secrets-2820" to be "success or failure"
Jan 20 14:18:11.964: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.138953ms
Jan 20 14:18:13.980: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025625842s
Jan 20 14:18:15.998: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043402066s
Jan 20 14:18:18.008: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053065008s
Jan 20 14:18:20.029: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074178008s
STEP: Saw pod success
Jan 20 14:18:20.029: INFO: Pod "pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4" satisfied condition "success or failure"
Jan 20 14:18:20.034: INFO: Trying to get logs from node iruya-node pod pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4 container secret-volume-test: 
STEP: delete the pod
Jan 20 14:18:20.106: INFO: Waiting for pod pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4 to disappear
Jan 20 14:18:20.112: INFO: Pod pod-secrets-f4caf3ec-e039-41e5-af5d-4ee01d2460e4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:18:20.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2820" for this suite.
Jan 20 14:18:26.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:18:26.345: INFO: namespace secrets-2820 deletion completed in 6.224297783s

• [SLOW TEST:14.606 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
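The Secrets test above mounts the same secret into one pod at two paths. A sketch of that pattern (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Read the key from both mounts to confirm each volume was populated.
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test   # hypothetical; both volumes reference it
  - name: secret-volume-2
    secret:
      secretName: secret-test
```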
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:18:26.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 14:18:26.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62" in namespace "downward-api-8947" to be "success or failure"
Jan 20 14:18:26.429: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Pending", Reason="", readiness=false. Elapsed: 7.806814ms
Jan 20 14:18:28.438: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01691577s
Jan 20 14:18:30.446: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025647171s
Jan 20 14:18:32.458: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037135374s
Jan 20 14:18:34.508: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087074801s
Jan 20 14:18:36.523: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102501264s
STEP: Saw pod success
Jan 20 14:18:36.524: INFO: Pod "downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62" satisfied condition "success or failure"
Jan 20 14:18:36.530: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62 container client-container: 
STEP: delete the pod
Jan 20 14:18:36.611: INFO: Waiting for pod downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62 to disappear
Jan 20 14:18:36.621: INFO: Pod downwardapi-volume-dd58c1b7-17ab-447b-876c-a7d27ad46b62 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:18:36.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8947" for this suite.
Jan 20 14:18:42.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:18:42.888: INFO: namespace downward-api-8947 deletion completed in 6.254631677s

• [SLOW TEST:16.543 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:18:42.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 14:18:43.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8101'
Jan 20 14:18:43.207: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 14:18:43.207: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 20 14:18:43.358: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bv7rd]
Jan 20 14:18:43.358: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bv7rd" in namespace "kubectl-8101" to be "running and ready"
Jan 20 14:18:43.369: INFO: Pod "e2e-test-nginx-rc-bv7rd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.148315ms
Jan 20 14:18:45.381: INFO: Pod "e2e-test-nginx-rc-bv7rd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022967034s
Jan 20 14:18:47.390: INFO: Pod "e2e-test-nginx-rc-bv7rd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032318299s
Jan 20 14:18:49.396: INFO: Pod "e2e-test-nginx-rc-bv7rd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038755089s
Jan 20 14:18:51.403: INFO: Pod "e2e-test-nginx-rc-bv7rd": Phase="Running", Reason="", readiness=true. Elapsed: 8.045797624s
Jan 20 14:18:51.404: INFO: Pod "e2e-test-nginx-rc-bv7rd" satisfied condition "running and ready"
Jan 20 14:18:51.404: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-bv7rd]
Jan 20 14:18:51.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8101'
Jan 20 14:18:51.599: INFO: stderr: ""
Jan 20 14:18:51.599: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 20 14:18:51.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8101'
Jan 20 14:18:51.784: INFO: stderr: ""
Jan 20 14:18:51.784: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:18:51.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8101" for this suite.
Jan 20 14:19:13.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:19:13.985: INFO: namespace kubectl-8101 deletion completed in 22.181683028s

• [SLOW TEST:31.096 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:19:13.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1872/configmap-test-65721d9c-6606-4ea7-bbf5-e2345c6fed72
STEP: Creating a pod to test consume configMaps
Jan 20 14:19:14.237: INFO: Waiting up to 5m0s for pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688" in namespace "configmap-1872" to be "success or failure"
Jan 20 14:19:14.371: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688": Phase="Pending", Reason="", readiness=false. Elapsed: 133.938764ms
Jan 20 14:19:16.380: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142252078s
Jan 20 14:19:18.389: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151178428s
Jan 20 14:19:20.402: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164173649s
Jan 20 14:19:22.409: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171021437s
STEP: Saw pod success
Jan 20 14:19:22.409: INFO: Pod "pod-configmaps-96554569-da44-443c-842a-71a415d11688" satisfied condition "success or failure"
Jan 20 14:19:22.413: INFO: Trying to get logs from node iruya-node pod pod-configmaps-96554569-da44-443c-842a-71a415d11688 container env-test: 
STEP: delete the pod
Jan 20 14:19:22.533: INFO: Waiting for pod pod-configmaps-96554569-da44-443c-842a-71a415d11688 to disappear
Jan 20 14:19:22.546: INFO: Pod pod-configmaps-96554569-da44-443c-842a-71a415d11688 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:19:22.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1872" for this suite.
Jan 20 14:19:28.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:19:28.775: INFO: namespace configmap-1872 deletion completed in 6.188405805s

• [SLOW TEST:14.790 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:19:28.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 20 14:19:28.881: INFO: Waiting up to 5m0s for pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1" in namespace "containers-2737" to be "success or failure"
Jan 20 14:19:28.888: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16193ms
Jan 20 14:19:30.899: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017972595s
Jan 20 14:19:32.910: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028470499s
Jan 20 14:19:34.928: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046243722s
Jan 20 14:19:36.940: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058278932s
STEP: Saw pod success
Jan 20 14:19:36.940: INFO: Pod "client-containers-bce18151-51e4-457d-b660-7ac4951c97c1" satisfied condition "success or failure"
Jan 20 14:19:36.945: INFO: Trying to get logs from node iruya-node pod client-containers-bce18151-51e4-457d-b660-7ac4951c97c1 container test-container: 
STEP: delete the pod
Jan 20 14:19:37.014: INFO: Waiting for pod client-containers-bce18151-51e4-457d-b660-7ac4951c97c1 to disappear
Jan 20 14:19:37.098: INFO: Pod client-containers-bce18151-51e4-457d-b660-7ac4951c97c1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:19:37.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2737" for this suite.
Jan 20 14:19:43.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:19:43.273: INFO: namespace containers-2737 deletion completed in 6.168532796s

• [SLOW TEST:14.498 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:19:43.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 20 14:19:43.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-285 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 20 14:19:53.957: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0120 14:19:52.618012    2291 log.go:172] (0xc00083a160) (0xc0006d61e0) Create stream\nI0120 14:19:52.618673    2291 log.go:172] (0xc00083a160) (0xc0006d61e0) Stream added, broadcasting: 1\nI0120 14:19:52.625760    2291 log.go:172] (0xc00083a160) Reply frame received for 1\nI0120 14:19:52.625893    2291 log.go:172] (0xc00083a160) (0xc000a0e8c0) Create stream\nI0120 14:19:52.625904    2291 log.go:172] (0xc00083a160) (0xc000a0e8c0) Stream added, broadcasting: 3\nI0120 14:19:52.628701    2291 log.go:172] (0xc00083a160) Reply frame received for 3\nI0120 14:19:52.628749    2291 log.go:172] (0xc00083a160) (0xc00057e1e0) Create stream\nI0120 14:19:52.628761    2291 log.go:172] (0xc00083a160) (0xc00057e1e0) Stream added, broadcasting: 5\nI0120 14:19:52.630080    2291 log.go:172] (0xc00083a160) Reply frame received for 5\nI0120 14:19:52.630106    2291 log.go:172] (0xc00083a160) (0xc0006d6280) Create stream\nI0120 14:19:52.630121    2291 log.go:172] (0xc00083a160) (0xc0006d6280) Stream added, broadcasting: 7\nI0120 14:19:52.633708    2291 log.go:172] (0xc00083a160) Reply frame received for 7\nI0120 14:19:52.634339    2291 log.go:172] (0xc000a0e8c0) (3) Writing data frame\nI0120 14:19:52.634995    2291 log.go:172] (0xc000a0e8c0) (3) Writing data frame\nI0120 14:19:52.647874    2291 log.go:172] (0xc00083a160) Data frame received for 5\nI0120 14:19:52.647917    2291 log.go:172] (0xc00057e1e0) (5) Data frame handling\nI0120 14:19:52.647943    2291 log.go:172] (0xc00057e1e0) (5) Data frame sent\nI0120 14:19:52.650328    2291 log.go:172] (0xc00083a160) Data frame received for 5\nI0120 14:19:52.650338    2291 log.go:172] (0xc00057e1e0) (5) Data frame handling\nI0120 14:19:52.650348    2291 log.go:172] (0xc00057e1e0) (5) Data frame sent\nI0120 14:19:53.875606    2291 log.go:172] (0xc00083a160) (0xc000a0e8c0) Stream removed, broadcasting: 3\nI0120 14:19:53.876609    2291 log.go:172] (0xc00083a160) Data frame received for 1\nI0120 14:19:53.877032    2291 log.go:172] (0xc00083a160) (0xc00057e1e0) Stream removed, broadcasting: 5\nI0120 14:19:53.877623    2291 log.go:172] (0xc0006d61e0) (1) Data frame handling\nI0120 14:19:53.877718    2291 log.go:172] (0xc0006d61e0) (1) Data frame sent\nI0120 14:19:53.877852    2291 log.go:172] (0xc00083a160) (0xc0006d61e0) Stream removed, broadcasting: 1\nI0120 14:19:53.878052    2291 log.go:172] (0xc00083a160) (0xc0006d6280) Stream removed, broadcasting: 7\nI0120 14:19:53.878157    2291 log.go:172] (0xc00083a160) Go away received\nI0120 14:19:53.879013    2291 log.go:172] (0xc00083a160) (0xc0006d61e0) Stream removed, broadcasting: 1\nI0120 14:19:53.879069    2291 log.go:172] (0xc00083a160) (0xc000a0e8c0) Stream removed, broadcasting: 3\nI0120 14:19:53.879092    2291 log.go:172] (0xc00083a160) (0xc00057e1e0) Stream removed, broadcasting: 5\nI0120 14:19:53.879116    2291 log.go:172] (0xc00083a160) (0xc0006d6280) Stream removed, broadcasting: 7\n"
Jan 20 14:19:53.957: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:19:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-285" for this suite.
Jan 20 14:20:02.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:20:02.152: INFO: namespace kubectl-285 deletion completed in 6.159812815s

• [SLOW TEST:18.878 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:20:02.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 20 14:20:22.488: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:22.489: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:22.582657       8 log.go:172] (0xc000e28370) (0xc002b96e60) Create stream
I0120 14:20:22.582755       8 log.go:172] (0xc000e28370) (0xc002b96e60) Stream added, broadcasting: 1
I0120 14:20:22.597308       8 log.go:172] (0xc000e28370) Reply frame received for 1
I0120 14:20:22.597392       8 log.go:172] (0xc000e28370) (0xc002756f00) Create stream
I0120 14:20:22.597410       8 log.go:172] (0xc000e28370) (0xc002756f00) Stream added, broadcasting: 3
I0120 14:20:22.600335       8 log.go:172] (0xc000e28370) Reply frame received for 3
I0120 14:20:22.600375       8 log.go:172] (0xc000e28370) (0xc002b96f00) Create stream
I0120 14:20:22.600394       8 log.go:172] (0xc000e28370) (0xc002b96f00) Stream added, broadcasting: 5
I0120 14:20:22.603822       8 log.go:172] (0xc000e28370) Reply frame received for 5
I0120 14:20:22.705832       8 log.go:172] (0xc000e28370) Data frame received for 3
I0120 14:20:22.705879       8 log.go:172] (0xc002756f00) (3) Data frame handling
I0120 14:20:22.705897       8 log.go:172] (0xc002756f00) (3) Data frame sent
I0120 14:20:22.864539       8 log.go:172] (0xc000e28370) Data frame received for 1
I0120 14:20:22.864876       8 log.go:172] (0xc000e28370) (0xc002b96f00) Stream removed, broadcasting: 5
I0120 14:20:22.865022       8 log.go:172] (0xc002b96e60) (1) Data frame handling
I0120 14:20:22.865070       8 log.go:172] (0xc002b96e60) (1) Data frame sent
I0120 14:20:22.865104       8 log.go:172] (0xc000e28370) (0xc002b96e60) Stream removed, broadcasting: 1
I0120 14:20:22.865597       8 log.go:172] (0xc000e28370) (0xc002756f00) Stream removed, broadcasting: 3
I0120 14:20:22.865632       8 log.go:172] (0xc000e28370) Go away received
I0120 14:20:22.866119       8 log.go:172] (0xc000e28370) (0xc002b96e60) Stream removed, broadcasting: 1
I0120 14:20:22.866165       8 log.go:172] (0xc000e28370) (0xc002756f00) Stream removed, broadcasting: 3
I0120 14:20:22.866192       8 log.go:172] (0xc000e28370) (0xc002b96f00) Stream removed, broadcasting: 5
Jan 20 14:20:22.866: INFO: Exec stderr: ""
Jan 20 14:20:22.866: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:22.866: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:22.950907       8 log.go:172] (0xc002ad0fd0) (0xc002757220) Create stream
I0120 14:20:22.951099       8 log.go:172] (0xc002ad0fd0) (0xc002757220) Stream added, broadcasting: 1
I0120 14:20:22.962450       8 log.go:172] (0xc002ad0fd0) Reply frame received for 1
I0120 14:20:22.962505       8 log.go:172] (0xc002ad0fd0) (0xc002645b80) Create stream
I0120 14:20:22.962519       8 log.go:172] (0xc002ad0fd0) (0xc002645b80) Stream added, broadcasting: 3
I0120 14:20:22.965606       8 log.go:172] (0xc002ad0fd0) Reply frame received for 3
I0120 14:20:22.965647       8 log.go:172] (0xc002ad0fd0) (0xc002645cc0) Create stream
I0120 14:20:22.965662       8 log.go:172] (0xc002ad0fd0) (0xc002645cc0) Stream added, broadcasting: 5
I0120 14:20:22.971090       8 log.go:172] (0xc002ad0fd0) Reply frame received for 5
I0120 14:20:23.113464       8 log.go:172] (0xc002ad0fd0) Data frame received for 3
I0120 14:20:23.113537       8 log.go:172] (0xc002645b80) (3) Data frame handling
I0120 14:20:23.113565       8 log.go:172] (0xc002645b80) (3) Data frame sent
I0120 14:20:23.245162       8 log.go:172] (0xc002ad0fd0) (0xc002645b80) Stream removed, broadcasting: 3
I0120 14:20:23.245508       8 log.go:172] (0xc002ad0fd0) Data frame received for 1
I0120 14:20:23.245541       8 log.go:172] (0xc002757220) (1) Data frame handling
I0120 14:20:23.245589       8 log.go:172] (0xc002757220) (1) Data frame sent
I0120 14:20:23.245612       8 log.go:172] (0xc002ad0fd0) (0xc002757220) Stream removed, broadcasting: 1
I0120 14:20:23.245838       8 log.go:172] (0xc002ad0fd0) (0xc002645cc0) Stream removed, broadcasting: 5
I0120 14:20:23.245873       8 log.go:172] (0xc002ad0fd0) Go away received
I0120 14:20:23.245899       8 log.go:172] (0xc002ad0fd0) (0xc002757220) Stream removed, broadcasting: 1
I0120 14:20:23.245920       8 log.go:172] (0xc002ad0fd0) (0xc002645b80) Stream removed, broadcasting: 3
I0120 14:20:23.245934       8 log.go:172] (0xc002ad0fd0) (0xc002645cc0) Stream removed, broadcasting: 5
Jan 20 14:20:23.245: INFO: Exec stderr: ""
Jan 20 14:20:23.246: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:23.246: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:23.317298       8 log.go:172] (0xc0009de8f0) (0xc003330280) Create stream
I0120 14:20:23.317365       8 log.go:172] (0xc0009de8f0) (0xc003330280) Stream added, broadcasting: 1
I0120 14:20:23.325712       8 log.go:172] (0xc0009de8f0) Reply frame received for 1
I0120 14:20:23.325838       8 log.go:172] (0xc0009de8f0) (0xc0027572c0) Create stream
I0120 14:20:23.325848       8 log.go:172] (0xc0009de8f0) (0xc0027572c0) Stream added, broadcasting: 3
I0120 14:20:23.327277       8 log.go:172] (0xc0009de8f0) Reply frame received for 3
I0120 14:20:23.327307       8 log.go:172] (0xc0009de8f0) (0xc001dad540) Create stream
I0120 14:20:23.327323       8 log.go:172] (0xc0009de8f0) (0xc001dad540) Stream added, broadcasting: 5
I0120 14:20:23.328400       8 log.go:172] (0xc0009de8f0) Reply frame received for 5
I0120 14:20:23.426806       8 log.go:172] (0xc0009de8f0) Data frame received for 3
I0120 14:20:23.426927       8 log.go:172] (0xc0027572c0) (3) Data frame handling
I0120 14:20:23.426952       8 log.go:172] (0xc0027572c0) (3) Data frame sent
I0120 14:20:23.556801       8 log.go:172] (0xc0009de8f0) Data frame received for 1
I0120 14:20:23.557099       8 log.go:172] (0xc0009de8f0) (0xc0027572c0) Stream removed, broadcasting: 3
I0120 14:20:23.557258       8 log.go:172] (0xc003330280) (1) Data frame handling
I0120 14:20:23.557288       8 log.go:172] (0xc003330280) (1) Data frame sent
I0120 14:20:23.557315       8 log.go:172] (0xc0009de8f0) (0xc001dad540) Stream removed, broadcasting: 5
I0120 14:20:23.557351       8 log.go:172] (0xc0009de8f0) (0xc003330280) Stream removed, broadcasting: 1
I0120 14:20:23.557383       8 log.go:172] (0xc0009de8f0) Go away received
I0120 14:20:23.557587       8 log.go:172] (0xc0009de8f0) (0xc003330280) Stream removed, broadcasting: 1
I0120 14:20:23.557606       8 log.go:172] (0xc0009de8f0) (0xc0027572c0) Stream removed, broadcasting: 3
I0120 14:20:23.557611       8 log.go:172] (0xc0009de8f0) (0xc001dad540) Stream removed, broadcasting: 5
Jan 20 14:20:23.557: INFO: Exec stderr: ""
Jan 20 14:20:23.557: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:23.557: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:23.659117       8 log.go:172] (0xc002ad1d90) (0xc0027575e0) Create stream
I0120 14:20:23.659284       8 log.go:172] (0xc002ad1d90) (0xc0027575e0) Stream added, broadcasting: 1
I0120 14:20:23.672003       8 log.go:172] (0xc002ad1d90) Reply frame received for 1
I0120 14:20:23.672060       8 log.go:172] (0xc002ad1d90) (0xc001dad5e0) Create stream
I0120 14:20:23.672074       8 log.go:172] (0xc002ad1d90) (0xc001dad5e0) Stream added, broadcasting: 3
I0120 14:20:23.673688       8 log.go:172] (0xc002ad1d90) Reply frame received for 3
I0120 14:20:23.673714       8 log.go:172] (0xc002ad1d90) (0xc002b96fa0) Create stream
I0120 14:20:23.673725       8 log.go:172] (0xc002ad1d90) (0xc002b96fa0) Stream added, broadcasting: 5
I0120 14:20:23.675050       8 log.go:172] (0xc002ad1d90) Reply frame received for 5
I0120 14:20:23.792010       8 log.go:172] (0xc002ad1d90) Data frame received for 3
I0120 14:20:23.792180       8 log.go:172] (0xc001dad5e0) (3) Data frame handling
I0120 14:20:23.792207       8 log.go:172] (0xc001dad5e0) (3) Data frame sent
I0120 14:20:24.099800       8 log.go:172] (0xc002ad1d90) (0xc001dad5e0) Stream removed, broadcasting: 3
I0120 14:20:24.099914       8 log.go:172] (0xc002ad1d90) Data frame received for 1
I0120 14:20:24.099941       8 log.go:172] (0xc0027575e0) (1) Data frame handling
I0120 14:20:24.099965       8 log.go:172] (0xc0027575e0) (1) Data frame sent
I0120 14:20:24.099981       8 log.go:172] (0xc002ad1d90) (0xc0027575e0) Stream removed, broadcasting: 1
I0120 14:20:24.100060       8 log.go:172] (0xc002ad1d90) (0xc002b96fa0) Stream removed, broadcasting: 5
I0120 14:20:24.100100       8 log.go:172] (0xc002ad1d90) Go away received
I0120 14:20:24.100545       8 log.go:172] (0xc002ad1d90) (0xc0027575e0) Stream removed, broadcasting: 1
I0120 14:20:24.100679       8 log.go:172] (0xc002ad1d90) (0xc001dad5e0) Stream removed, broadcasting: 3
I0120 14:20:24.100689       8 log.go:172] (0xc002ad1d90) (0xc002b96fa0) Stream removed, broadcasting: 5
Jan 20 14:20:24.100: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 20 14:20:24.100: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:24.101: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:24.177525       8 log.go:172] (0xc000caedc0) (0xc002645ea0) Create stream
I0120 14:20:24.177640       8 log.go:172] (0xc000caedc0) (0xc002645ea0) Stream added, broadcasting: 1
I0120 14:20:24.189616       8 log.go:172] (0xc000caedc0) Reply frame received for 1
I0120 14:20:24.189690       8 log.go:172] (0xc000caedc0) (0xc002b97040) Create stream
I0120 14:20:24.189702       8 log.go:172] (0xc000caedc0) (0xc002b97040) Stream added, broadcasting: 3
I0120 14:20:24.191283       8 log.go:172] (0xc000caedc0) Reply frame received for 3
I0120 14:20:24.191307       8 log.go:172] (0xc000caedc0) (0xc002b970e0) Create stream
I0120 14:20:24.191312       8 log.go:172] (0xc000caedc0) (0xc002b970e0) Stream added, broadcasting: 5
I0120 14:20:24.192494       8 log.go:172] (0xc000caedc0) Reply frame received for 5
I0120 14:20:24.285658       8 log.go:172] (0xc000caedc0) Data frame received for 3
I0120 14:20:24.285738       8 log.go:172] (0xc002b97040) (3) Data frame handling
I0120 14:20:24.285753       8 log.go:172] (0xc002b97040) (3) Data frame sent
I0120 14:20:24.379769       8 log.go:172] (0xc000caedc0) Data frame received for 1
I0120 14:20:24.379824       8 log.go:172] (0xc000caedc0) (0xc002b97040) Stream removed, broadcasting: 3
I0120 14:20:24.379909       8 log.go:172] (0xc002645ea0) (1) Data frame handling
I0120 14:20:24.379946       8 log.go:172] (0xc002645ea0) (1) Data frame sent
I0120 14:20:24.379975       8 log.go:172] (0xc000caedc0) (0xc002b970e0) Stream removed, broadcasting: 5
I0120 14:20:24.380001       8 log.go:172] (0xc000caedc0) (0xc002645ea0) Stream removed, broadcasting: 1
I0120 14:20:24.380014       8 log.go:172] (0xc000caedc0) Go away received
I0120 14:20:24.380176       8 log.go:172] (0xc000caedc0) (0xc002645ea0) Stream removed, broadcasting: 1
I0120 14:20:24.380194       8 log.go:172] (0xc000caedc0) (0xc002b97040) Stream removed, broadcasting: 3
I0120 14:20:24.380200       8 log.go:172] (0xc000caedc0) (0xc002b970e0) Stream removed, broadcasting: 5
Jan 20 14:20:24.380: INFO: Exec stderr: ""
Jan 20 14:20:24.380: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:24.380: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:24.425432       8 log.go:172] (0xc000caf600) (0xc0031361e0) Create stream
I0120 14:20:24.425453       8 log.go:172] (0xc000caf600) (0xc0031361e0) Stream added, broadcasting: 1
I0120 14:20:24.430408       8 log.go:172] (0xc000caf600) Reply frame received for 1
I0120 14:20:24.430474       8 log.go:172] (0xc000caf600) (0xc001dad860) Create stream
I0120 14:20:24.430483       8 log.go:172] (0xc000caf600) (0xc001dad860) Stream added, broadcasting: 3
I0120 14:20:24.431343       8 log.go:172] (0xc000caf600) Reply frame received for 3
I0120 14:20:24.431360       8 log.go:172] (0xc000caf600) (0xc003136280) Create stream
I0120 14:20:24.431367       8 log.go:172] (0xc000caf600) (0xc003136280) Stream added, broadcasting: 5
I0120 14:20:24.432182       8 log.go:172] (0xc000caf600) Reply frame received for 5
I0120 14:20:24.585725       8 log.go:172] (0xc000caf600) Data frame received for 3
I0120 14:20:24.585914       8 log.go:172] (0xc001dad860) (3) Data frame handling
I0120 14:20:24.585975       8 log.go:172] (0xc001dad860) (3) Data frame sent
I0120 14:20:24.730709       8 log.go:172] (0xc000caf600) Data frame received for 1
I0120 14:20:24.730800       8 log.go:172] (0xc000caf600) (0xc003136280) Stream removed, broadcasting: 5
I0120 14:20:24.730860       8 log.go:172] (0xc0031361e0) (1) Data frame handling
I0120 14:20:24.730881       8 log.go:172] (0xc0031361e0) (1) Data frame sent
I0120 14:20:24.730902       8 log.go:172] (0xc000caf600) (0xc001dad860) Stream removed, broadcasting: 3
I0120 14:20:24.730921       8 log.go:172] (0xc000caf600) (0xc0031361e0) Stream removed, broadcasting: 1
I0120 14:20:24.730934       8 log.go:172] (0xc000caf600) Go away received
I0120 14:20:24.731366       8 log.go:172] (0xc000caf600) (0xc0031361e0) Stream removed, broadcasting: 1
I0120 14:20:24.731381       8 log.go:172] (0xc000caf600) (0xc001dad860) Stream removed, broadcasting: 3
I0120 14:20:24.731386       8 log.go:172] (0xc000caf600) (0xc003136280) Stream removed, broadcasting: 5
Jan 20 14:20:24.731: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 20 14:20:24.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:24.731: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:24.781497       8 log.go:172] (0xc001cd0000) (0xc002b97400) Create stream
I0120 14:20:24.781626       8 log.go:172] (0xc001cd0000) (0xc002b97400) Stream added, broadcasting: 1
I0120 14:20:24.788114       8 log.go:172] (0xc001cd0000) Reply frame received for 1
I0120 14:20:24.788165       8 log.go:172] (0xc001cd0000) (0xc001dad900) Create stream
I0120 14:20:24.788174       8 log.go:172] (0xc001cd0000) (0xc001dad900) Stream added, broadcasting: 3
I0120 14:20:24.791029       8 log.go:172] (0xc001cd0000) Reply frame received for 3
I0120 14:20:24.791195       8 log.go:172] (0xc001cd0000) (0xc003136320) Create stream
I0120 14:20:24.791217       8 log.go:172] (0xc001cd0000) (0xc003136320) Stream added, broadcasting: 5
I0120 14:20:24.793861       8 log.go:172] (0xc001cd0000) Reply frame received for 5
I0120 14:20:24.896214       8 log.go:172] (0xc001cd0000) Data frame received for 3
I0120 14:20:24.896478       8 log.go:172] (0xc001dad900) (3) Data frame handling
I0120 14:20:24.896554       8 log.go:172] (0xc001dad900) (3) Data frame sent
I0120 14:20:25.010481       8 log.go:172] (0xc001cd0000) Data frame received for 1
I0120 14:20:25.010534       8 log.go:172] (0xc001cd0000) (0xc003136320) Stream removed, broadcasting: 5
I0120 14:20:25.010579       8 log.go:172] (0xc002b97400) (1) Data frame handling
I0120 14:20:25.010594       8 log.go:172] (0xc001cd0000) (0xc001dad900) Stream removed, broadcasting: 3
I0120 14:20:25.010628       8 log.go:172] (0xc002b97400) (1) Data frame sent
I0120 14:20:25.010640       8 log.go:172] (0xc001cd0000) (0xc002b97400) Stream removed, broadcasting: 1
I0120 14:20:25.010669       8 log.go:172] (0xc001cd0000) Go away received
I0120 14:20:25.010908       8 log.go:172] (0xc001cd0000) (0xc002b97400) Stream removed, broadcasting: 1
I0120 14:20:25.010935       8 log.go:172] (0xc001cd0000) (0xc001dad900) Stream removed, broadcasting: 3
I0120 14:20:25.010954       8 log.go:172] (0xc001cd0000) (0xc003136320) Stream removed, broadcasting: 5
Jan 20 14:20:25.011: INFO: Exec stderr: ""
Jan 20 14:20:25.011: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:25.011: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:25.062071       8 log.go:172] (0xc0009dfce0) (0xc003330460) Create stream
I0120 14:20:25.062127       8 log.go:172] (0xc0009dfce0) (0xc003330460) Stream added, broadcasting: 1
I0120 14:20:25.067222       8 log.go:172] (0xc0009dfce0) Reply frame received for 1
I0120 14:20:25.067258       8 log.go:172] (0xc0009dfce0) (0xc003330500) Create stream
I0120 14:20:25.067268       8 log.go:172] (0xc0009dfce0) (0xc003330500) Stream added, broadcasting: 3
I0120 14:20:25.068451       8 log.go:172] (0xc0009dfce0) Reply frame received for 3
I0120 14:20:25.068479       8 log.go:172] (0xc0009dfce0) (0xc003136500) Create stream
I0120 14:20:25.068492       8 log.go:172] (0xc0009dfce0) (0xc003136500) Stream added, broadcasting: 5
I0120 14:20:25.071581       8 log.go:172] (0xc0009dfce0) Reply frame received for 5
I0120 14:20:25.236374       8 log.go:172] (0xc0009dfce0) Data frame received for 3
I0120 14:20:25.236462       8 log.go:172] (0xc003330500) (3) Data frame handling
I0120 14:20:25.236476       8 log.go:172] (0xc003330500) (3) Data frame sent
I0120 14:20:25.345547       8 log.go:172] (0xc0009dfce0) Data frame received for 1
I0120 14:20:25.345628       8 log.go:172] (0xc0009dfce0) (0xc003330500) Stream removed, broadcasting: 3
I0120 14:20:25.345698       8 log.go:172] (0xc003330460) (1) Data frame handling
I0120 14:20:25.345743       8 log.go:172] (0xc003330460) (1) Data frame sent
I0120 14:20:25.345769       8 log.go:172] (0xc0009dfce0) (0xc003136500) Stream removed, broadcasting: 5
I0120 14:20:25.345813       8 log.go:172] (0xc0009dfce0) (0xc003330460) Stream removed, broadcasting: 1
I0120 14:20:25.345847       8 log.go:172] (0xc0009dfce0) Go away received
I0120 14:20:25.346153       8 log.go:172] (0xc0009dfce0) (0xc003330460) Stream removed, broadcasting: 1
I0120 14:20:25.346180       8 log.go:172] (0xc0009dfce0) (0xc003330500) Stream removed, broadcasting: 3
I0120 14:20:25.346197       8 log.go:172] (0xc0009dfce0) (0xc003136500) Stream removed, broadcasting: 5
Jan 20 14:20:25.346: INFO: Exec stderr: ""
Jan 20 14:20:25.346: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:25.346: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:25.415305       8 log.go:172] (0xc0024b4000) (0xc001dadea0) Create stream
I0120 14:20:25.415356       8 log.go:172] (0xc0024b4000) (0xc001dadea0) Stream added, broadcasting: 1
I0120 14:20:25.420520       8 log.go:172] (0xc0024b4000) Reply frame received for 1
I0120 14:20:25.420550       8 log.go:172] (0xc0024b4000) (0xc002757680) Create stream
I0120 14:20:25.420558       8 log.go:172] (0xc0024b4000) (0xc002757680) Stream added, broadcasting: 3
I0120 14:20:25.421927       8 log.go:172] (0xc0024b4000) Reply frame received for 3
I0120 14:20:25.422036       8 log.go:172] (0xc0024b4000) (0xc0033305a0) Create stream
I0120 14:20:25.422044       8 log.go:172] (0xc0024b4000) (0xc0033305a0) Stream added, broadcasting: 5
I0120 14:20:25.423045       8 log.go:172] (0xc0024b4000) Reply frame received for 5
I0120 14:20:25.505711       8 log.go:172] (0xc0024b4000) Data frame received for 3
I0120 14:20:25.505752       8 log.go:172] (0xc002757680) (3) Data frame handling
I0120 14:20:25.505771       8 log.go:172] (0xc002757680) (3) Data frame sent
I0120 14:20:25.619315       8 log.go:172] (0xc0024b4000) (0xc0033305a0) Stream removed, broadcasting: 5
I0120 14:20:25.619479       8 log.go:172] (0xc0024b4000) Data frame received for 1
I0120 14:20:25.619512       8 log.go:172] (0xc0024b4000) (0xc002757680) Stream removed, broadcasting: 3
I0120 14:20:25.619555       8 log.go:172] (0xc001dadea0) (1) Data frame handling
I0120 14:20:25.619587       8 log.go:172] (0xc001dadea0) (1) Data frame sent
I0120 14:20:25.619604       8 log.go:172] (0xc0024b4000) (0xc001dadea0) Stream removed, broadcasting: 1
I0120 14:20:25.619620       8 log.go:172] (0xc0024b4000) Go away received
I0120 14:20:25.620008       8 log.go:172] (0xc0024b4000) (0xc001dadea0) Stream removed, broadcasting: 1
I0120 14:20:25.620034       8 log.go:172] (0xc0024b4000) (0xc002757680) Stream removed, broadcasting: 3
I0120 14:20:25.620049       8 log.go:172] (0xc0024b4000) (0xc0033305a0) Stream removed, broadcasting: 5
Jan 20 14:20:25.620: INFO: Exec stderr: ""
Jan 20 14:20:25.620: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3395 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:20:25.620: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:20:25.707576       8 log.go:172] (0xc0024b4c60) (0xc0018d4320) Create stream
I0120 14:20:25.707633       8 log.go:172] (0xc0024b4c60) (0xc0018d4320) Stream added, broadcasting: 1
I0120 14:20:25.711671       8 log.go:172] (0xc0024b4c60) Reply frame received for 1
I0120 14:20:25.711773       8 log.go:172] (0xc0024b4c60) (0xc0018d43c0) Create stream
I0120 14:20:25.711779       8 log.go:172] (0xc0024b4c60) (0xc0018d43c0) Stream added, broadcasting: 3
I0120 14:20:25.713497       8 log.go:172] (0xc0024b4c60) Reply frame received for 3
I0120 14:20:25.713537       8 log.go:172] (0xc0024b4c60) (0xc002757720) Create stream
I0120 14:20:25.713548       8 log.go:172] (0xc0024b4c60) (0xc002757720) Stream added, broadcasting: 5
I0120 14:20:25.715301       8 log.go:172] (0xc0024b4c60) Reply frame received for 5
I0120 14:20:25.802385       8 log.go:172] (0xc0024b4c60) Data frame received for 3
I0120 14:20:25.802433       8 log.go:172] (0xc0018d43c0) (3) Data frame handling
I0120 14:20:25.802451       8 log.go:172] (0xc0018d43c0) (3) Data frame sent
I0120 14:20:25.948005       8 log.go:172] (0xc0024b4c60) (0xc0018d43c0) Stream removed, broadcasting: 3
I0120 14:20:25.948218       8 log.go:172] (0xc0024b4c60) Data frame received for 1
I0120 14:20:25.948251       8 log.go:172] (0xc0018d4320) (1) Data frame handling
I0120 14:20:25.948438       8 log.go:172] (0xc0024b4c60) (0xc002757720) Stream removed, broadcasting: 5
I0120 14:20:25.948620       8 log.go:172] (0xc0018d4320) (1) Data frame sent
I0120 14:20:25.948653       8 log.go:172] (0xc0024b4c60) (0xc0018d4320) Stream removed, broadcasting: 1
I0120 14:20:25.948705       8 log.go:172] (0xc0024b4c60) Go away received
I0120 14:20:25.949181       8 log.go:172] (0xc0024b4c60) (0xc0018d4320) Stream removed, broadcasting: 1
I0120 14:20:25.949205       8 log.go:172] (0xc0024b4c60) (0xc0018d43c0) Stream removed, broadcasting: 3
I0120 14:20:25.949213       8 log.go:172] (0xc0024b4c60) (0xc002757720) Stream removed, broadcasting: 5
Jan 20 14:20:25.949: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:20:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3395" for this suite.
Jan 20 14:21:12.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:21:12.105: INFO: namespace e2e-kubelet-etc-hosts-3395 deletion completed in 46.145617058s

• [SLOW TEST:69.952 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:21:12.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 20 14:21:12.197: INFO: Waiting up to 5m0s for pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db" in namespace "emptydir-9057" to be "success or failure"
Jan 20 14:21:12.203: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Pending", Reason="", readiness=false. Elapsed: 5.841294ms
Jan 20 14:21:14.211: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013938012s
Jan 20 14:21:16.217: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01999218s
Jan 20 14:21:18.230: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033155763s
Jan 20 14:21:20.246: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049129996s
Jan 20 14:21:22.257: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Running", Reason="", readiness=true. Elapsed: 10.059927077s
Jan 20 14:21:24.263: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.065705128s
STEP: Saw pod success
Jan 20 14:21:24.263: INFO: Pod "pod-ffb41188-fa69-47f5-a17a-ec2e4df788db" satisfied condition "success or failure"
Jan 20 14:21:24.281: INFO: Trying to get logs from node iruya-node pod pod-ffb41188-fa69-47f5-a17a-ec2e4df788db container test-container: 
STEP: delete the pod
Jan 20 14:21:24.368: INFO: Waiting for pod pod-ffb41188-fa69-47f5-a17a-ec2e4df788db to disappear
Jan 20 14:21:24.373: INFO: Pod pod-ffb41188-fa69-47f5-a17a-ec2e4df788db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:21:24.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9057" for this suite.
Jan 20 14:21:30.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:21:30.508: INFO: namespace emptydir-9057 deletion completed in 6.130406699s

• [SLOW TEST:18.403 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:21:30.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 14:21:30.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383" in namespace "downward-api-8100" to be "success or failure"
Jan 20 14:21:30.740: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Pending", Reason="", readiness=false. Elapsed: 50.924162ms
Jan 20 14:21:32.748: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058719454s
Jan 20 14:21:34.757: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067756826s
Jan 20 14:21:36.767: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077445567s
Jan 20 14:21:41.346: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656609731s
Jan 20 14:21:43.356: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.666367255s
STEP: Saw pod success
Jan 20 14:21:43.356: INFO: Pod "downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383" satisfied condition "success or failure"
Jan 20 14:21:43.361: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383 container client-container: 
STEP: delete the pod
Jan 20 14:21:43.556: INFO: Waiting for pod downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383 to disappear
Jan 20 14:21:43.563: INFO: Pod downwardapi-volume-3738b010-5282-464e-a99a-7786e9bfe383 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:21:43.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8100" for this suite.
Jan 20 14:21:49.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:21:49.726: INFO: namespace downward-api-8100 deletion completed in 6.158434152s

• [SLOW TEST:19.218 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:21:49.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 20 14:21:49.817: INFO: Waiting up to 5m0s for pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7" in namespace "containers-2488" to be "success or failure"
Jan 20 14:21:49.824: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854624ms
Jan 20 14:21:51.839: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021577017s
Jan 20 14:21:53.875: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057617355s
Jan 20 14:21:55.884: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066605973s
Jan 20 14:21:57.893: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075353142s
Jan 20 14:21:59.904: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087230475s
STEP: Saw pod success
Jan 20 14:21:59.905: INFO: Pod "client-containers-c41ec925-b696-44dc-b141-cab87cecbca7" satisfied condition "success or failure"
Jan 20 14:21:59.909: INFO: Trying to get logs from node iruya-node pod client-containers-c41ec925-b696-44dc-b141-cab87cecbca7 container test-container: 
STEP: delete the pod
Jan 20 14:22:00.152: INFO: Waiting for pod client-containers-c41ec925-b696-44dc-b141-cab87cecbca7 to disappear
Jan 20 14:22:00.162: INFO: Pod client-containers-c41ec925-b696-44dc-b141-cab87cecbca7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:22:00.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2488" for this suite.
Jan 20 14:22:06.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:22:06.376: INFO: namespace containers-2488 deletion completed in 6.201168464s

• [SLOW TEST:16.649 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:22:06.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 20 14:22:06.541: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:22:20.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9353" for this suite.
Jan 20 14:22:26.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:22:26.544: INFO: namespace init-container-9353 deletion completed in 6.181114007s

• [SLOW TEST:20.168 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:22:26.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 20 14:22:26.683: INFO: Waiting up to 5m0s for pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107" in namespace "var-expansion-1990" to be "success or failure"
Jan 20 14:22:26.698: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107": Phase="Pending", Reason="", readiness=false. Elapsed: 14.684748ms
Jan 20 14:22:28.705: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021751758s
Jan 20 14:22:30.710: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02739984s
Jan 20 14:22:32.742: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058599946s
Jan 20 14:22:34.749: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065760128s
STEP: Saw pod success
Jan 20 14:22:34.749: INFO: Pod "var-expansion-811aefc7-7141-4b08-9a23-633f594de107" satisfied condition "success or failure"
Jan 20 14:22:34.752: INFO: Trying to get logs from node iruya-node pod var-expansion-811aefc7-7141-4b08-9a23-633f594de107 container dapi-container: 
STEP: delete the pod
Jan 20 14:22:34.805: INFO: Waiting for pod var-expansion-811aefc7-7141-4b08-9a23-633f594de107 to disappear
Jan 20 14:22:34.835: INFO: Pod var-expansion-811aefc7-7141-4b08-9a23-633f594de107 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:22:34.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1990" for this suite.
Jan 20 14:22:40.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:22:40.974: INFO: namespace var-expansion-1990 deletion completed in 6.134691504s

• [SLOW TEST:14.430 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:22:40.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-890
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 20 14:22:41.136: INFO: Found 0 stateful pods, waiting for 3
Jan 20 14:22:51.495: INFO: Found 2 stateful pods, waiting for 3
Jan 20 14:23:01.157: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:23:01.157: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:23:01.157: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 14:23:11.146: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:23:11.146: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:23:11.146: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 14:23:11.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-890 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:23:11.658: INFO: stderr: "I0120 14:23:11.421597    2315 log.go:172] (0xc0008a6370) (0xc00096e640) Create stream\nI0120 14:23:11.423252    2315 log.go:172] (0xc0008a6370) (0xc00096e640) Stream added, broadcasting: 1\nI0120 14:23:11.432472    2315 log.go:172] (0xc0008a6370) Reply frame received for 1\nI0120 14:23:11.432535    2315 log.go:172] (0xc0008a6370) (0xc000960000) Create stream\nI0120 14:23:11.432549    2315 log.go:172] (0xc0008a6370) (0xc000960000) Stream added, broadcasting: 3\nI0120 14:23:11.433909    2315 log.go:172] (0xc0008a6370) Reply frame received for 3\nI0120 14:23:11.433939    2315 log.go:172] (0xc0008a6370) (0xc000764460) Create stream\nI0120 14:23:11.433953    2315 log.go:172] (0xc0008a6370) (0xc000764460) Stream added, broadcasting: 5\nI0120 14:23:11.435476    2315 log.go:172] (0xc0008a6370) Reply frame received for 5\nI0120 14:23:11.536833    2315 log.go:172] (0xc0008a6370) Data frame received for 5\nI0120 14:23:11.536932    2315 log.go:172] (0xc000764460) (5) Data frame handling\nI0120 14:23:11.536968    2315 log.go:172] (0xc000764460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:23:11.571030    2315 log.go:172] (0xc0008a6370) Data frame received for 3\nI0120 14:23:11.571075    2315 log.go:172] (0xc000960000) (3) Data frame handling\nI0120 14:23:11.571111    2315 log.go:172] (0xc000960000) (3) Data frame sent\nI0120 14:23:11.649771    2315 log.go:172] (0xc0008a6370) Data frame received for 1\nI0120 14:23:11.649839    2315 log.go:172] (0xc00096e640) (1) Data frame handling\nI0120 14:23:11.649867    2315 log.go:172] (0xc00096e640) (1) Data frame sent\nI0120 14:23:11.649901    2315 log.go:172] (0xc0008a6370) (0xc000960000) Stream removed, broadcasting: 3\nI0120 14:23:11.649986    2315 log.go:172] (0xc0008a6370) (0xc00096e640) Stream removed, broadcasting: 1\nI0120 14:23:11.651319    2315 log.go:172] (0xc0008a6370) (0xc000764460) Stream removed, broadcasting: 5\nI0120 14:23:11.651429    2315 log.go:172] (0xc0008a6370) Go away received\nI0120 14:23:11.651607    2315 log.go:172] (0xc0008a6370) (0xc00096e640) Stream removed, broadcasting: 1\nI0120 14:23:11.651643    2315 log.go:172] (0xc0008a6370) (0xc000960000) Stream removed, broadcasting: 3\nI0120 14:23:11.651661    2315 log.go:172] (0xc0008a6370) (0xc000764460) Stream removed, broadcasting: 5\n"
Jan 20 14:23:11.659: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:23:11.659: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 20 14:23:21.711: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 20 14:23:31.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-890 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:23:32.358: INFO: stderr: "I0120 14:23:32.085785    2334 log.go:172] (0xc00085e630) (0xc000662820) Create stream\nI0120 14:23:32.086307    2334 log.go:172] (0xc00085e630) (0xc000662820) Stream added, broadcasting: 1\nI0120 14:23:32.098658    2334 log.go:172] (0xc00085e630) Reply frame received for 1\nI0120 14:23:32.098712    2334 log.go:172] (0xc00085e630) (0xc000662000) Create stream\nI0120 14:23:32.098726    2334 log.go:172] (0xc00085e630) (0xc000662000) Stream added, broadcasting: 3\nI0120 14:23:32.099439    2334 log.go:172] (0xc00085e630) Reply frame received for 3\nI0120 14:23:32.099472    2334 log.go:172] (0xc00085e630) (0xc00060c320) Create stream\nI0120 14:23:32.099485    2334 log.go:172] (0xc00085e630) (0xc00060c320) Stream added, broadcasting: 5\nI0120 14:23:32.103752    2334 log.go:172] (0xc00085e630) Reply frame received for 5\nI0120 14:23:32.208617    2334 log.go:172] (0xc00085e630) Data frame received for 5\nI0120 14:23:32.208920    2334 log.go:172] (0xc00060c320) (5) Data frame handling\nI0120 14:23:32.208974    2334 log.go:172] (0xc00060c320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 14:23:32.209042    2334 log.go:172] (0xc00085e630) Data frame received for 3\nI0120 14:23:32.209062    2334 log.go:172] (0xc000662000) (3) Data frame handling\nI0120 14:23:32.209090    2334 log.go:172] (0xc000662000) (3) Data frame sent\nI0120 14:23:32.346347    2334 log.go:172] (0xc00085e630) Data frame received for 1\nI0120 14:23:32.346596    2334 log.go:172] (0xc00085e630) (0xc00060c320) Stream removed, broadcasting: 5\nI0120 14:23:32.346705    2334 log.go:172] (0xc000662820) (1) Data frame handling\nI0120 14:23:32.346756    2334 log.go:172] (0xc000662820) (1) Data frame sent\nI0120 14:23:32.346850    2334 log.go:172] (0xc00085e630) (0xc000662000) Stream removed, broadcasting: 3\nI0120 14:23:32.347171    2334 log.go:172] (0xc00085e630) (0xc000662820) Stream removed, broadcasting: 1\nI0120 14:23:32.347418    2334 log.go:172] (0xc00085e630) Go away received\nI0120 14:23:32.348725    2334 log.go:172] (0xc00085e630) (0xc000662820) Stream removed, broadcasting: 1\nI0120 14:23:32.348806    2334 log.go:172] (0xc00085e630) (0xc000662000) Stream removed, broadcasting: 3\nI0120 14:23:32.348856    2334 log.go:172] (0xc00085e630) (0xc00060c320) Stream removed, broadcasting: 5\n"
Jan 20 14:23:32.358: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 14:23:32.358: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

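The template update logged above (nginx:1.14-alpine to 1.15-alpine) can be reproduced by hand against a comparable cluster. A minimal sketch, assuming a StatefulSet named ss2 in namespace statefulset-890 whose first container runs the old image, as in this run:

```shell
# Sketch only: assumes StatefulSet "ss2" in namespace "statefulset-890",
# matching the objects created by this e2e run.
kubectl -n statefulset-890 patch statefulset ss2 --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'

# Watch the rolling update proceed (pods are replaced in reverse ordinal order).
kubectl -n statefulset-890 rollout status statefulset/ss2
```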
Jan 20 14:23:42.401: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:23:42.401: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 14:23:42.401: INFO: Waiting for Pod statefulset-890/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 14:23:52.419: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:23:52.419: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 14:23:52.419: INFO: Waiting for Pod statefulset-890/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 14:24:02.925: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:24:02.925: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 14:24:12.415: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:24:12.415: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jan 20 14:24:22.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-890 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 14:24:22.883: INFO: stderr: "I0120 14:24:22.629787    2354 log.go:172] (0xc000388370) (0xc0007b46e0) Create stream\nI0120 14:24:22.629917    2354 log.go:172] (0xc000388370) (0xc0007b46e0) Stream added, broadcasting: 1\nI0120 14:24:22.632312    2354 log.go:172] (0xc000388370) Reply frame received for 1\nI0120 14:24:22.632345    2354 log.go:172] (0xc000388370) (0xc0005a6280) Create stream\nI0120 14:24:22.632355    2354 log.go:172] (0xc000388370) (0xc0005a6280) Stream added, broadcasting: 3\nI0120 14:24:22.633101    2354 log.go:172] (0xc000388370) Reply frame received for 3\nI0120 14:24:22.633126    2354 log.go:172] (0xc000388370) (0xc00040fae0) Create stream\nI0120 14:24:22.633134    2354 log.go:172] (0xc000388370) (0xc00040fae0) Stream added, broadcasting: 5\nI0120 14:24:22.633895    2354 log.go:172] (0xc000388370) Reply frame received for 5\nI0120 14:24:22.763421    2354 log.go:172] (0xc000388370) Data frame received for 5\nI0120 14:24:22.763484    2354 log.go:172] (0xc00040fae0) (5) Data frame handling\nI0120 14:24:22.763499    2354 log.go:172] (0xc00040fae0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0120 14:24:22.803471    2354 log.go:172] (0xc000388370) Data frame received for 3\nI0120 14:24:22.803578    2354 log.go:172] (0xc0005a6280) (3) Data frame handling\nI0120 14:24:22.803600    2354 log.go:172] (0xc0005a6280) (3) Data frame sent\nI0120 14:24:22.873382    2354 log.go:172] (0xc000388370) (0xc0005a6280) Stream removed, broadcasting: 3\nI0120 14:24:22.873724    2354 log.go:172] (0xc000388370) Data frame received for 1\nI0120 14:24:22.873890    2354 log.go:172] (0xc000388370) (0xc00040fae0) Stream removed, broadcasting: 5\nI0120 14:24:22.873973    2354 log.go:172] (0xc0007b46e0) (1) Data frame handling\nI0120 14:24:22.874030    2354 log.go:172] (0xc0007b46e0) (1) Data frame sent\nI0120 14:24:22.874054    2354 log.go:172] (0xc000388370) (0xc0007b46e0) Stream removed, broadcasting: 1\nI0120 14:24:22.874076    2354 log.go:172] (0xc000388370) Go away received\nI0120 14:24:22.876203    2354 log.go:172] (0xc000388370) (0xc0007b46e0) Stream removed, broadcasting: 1\nI0120 14:24:22.876254    2354 log.go:172] (0xc000388370) (0xc0005a6280) Stream removed, broadcasting: 3\nI0120 14:24:22.876289    2354 log.go:172] (0xc000388370) (0xc00040fae0) Stream removed, broadcasting: 5\n"
Jan 20 14:24:22.883: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 14:24:22.883: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

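The rollback exercised here can also be triggered with kubectl's rollout machinery. A sketch, assuming the same ss2 StatefulSet and that the previous controller revision is still retained in its revision history:

```shell
# Sketch only: rolls ss2 back to its previous controller revision,
# equivalent in effect to the template rollback this test performs.
kubectl -n statefulset-890 rollout undo statefulset/ss2

# Inspect the revision history to confirm which template is now current.
kubectl -n statefulset-890 rollout history statefulset/ss2
```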
Jan 20 14:24:32.939: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 20 14:24:43.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-890 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 14:24:43.315: INFO: stderr: "I0120 14:24:43.172629    2374 log.go:172] (0xc0009c6370) (0xc000982640) Create stream\nI0120 14:24:43.172801    2374 log.go:172] (0xc0009c6370) (0xc000982640) Stream added, broadcasting: 1\nI0120 14:24:43.174983    2374 log.go:172] (0xc0009c6370) Reply frame received for 1\nI0120 14:24:43.175007    2374 log.go:172] (0xc0009c6370) (0xc00064c140) Create stream\nI0120 14:24:43.175013    2374 log.go:172] (0xc0009c6370) (0xc00064c140) Stream added, broadcasting: 3\nI0120 14:24:43.175683    2374 log.go:172] (0xc0009c6370) Reply frame received for 3\nI0120 14:24:43.175707    2374 log.go:172] (0xc0009c6370) (0xc000914000) Create stream\nI0120 14:24:43.175718    2374 log.go:172] (0xc0009c6370) (0xc000914000) Stream added, broadcasting: 5\nI0120 14:24:43.182279    2374 log.go:172] (0xc0009c6370) Reply frame received for 5\nI0120 14:24:43.242239    2374 log.go:172] (0xc0009c6370) Data frame received for 5\nI0120 14:24:43.242335    2374 log.go:172] (0xc000914000) (5) Data frame handling\nI0120 14:24:43.242347    2374 log.go:172] (0xc000914000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0120 14:24:43.242364    2374 log.go:172] (0xc0009c6370) Data frame received for 3\nI0120 14:24:43.242368    2374 log.go:172] (0xc00064c140) (3) Data frame handling\nI0120 14:24:43.242372    2374 log.go:172] (0xc00064c140) (3) Data frame sent\nI0120 14:24:43.307602    2374 log.go:172] (0xc0009c6370) (0xc00064c140) Stream removed, broadcasting: 3\nI0120 14:24:43.307773    2374 log.go:172] (0xc0009c6370) Data frame received for 1\nI0120 14:24:43.307834    2374 log.go:172] (0xc000982640) (1) Data frame handling\nI0120 14:24:43.307886    2374 log.go:172] (0xc0009c6370) (0xc000914000) Stream removed, broadcasting: 5\nI0120 14:24:43.307918    2374 log.go:172] (0xc000982640) (1) Data frame sent\nI0120 14:24:43.307934    2374 log.go:172] (0xc0009c6370) (0xc000982640) Stream removed, broadcasting: 1\nI0120 14:24:43.307947    2374 log.go:172] (0xc0009c6370) Go away received\nI0120 14:24:43.308773    2374 log.go:172] (0xc0009c6370) (0xc000982640) Stream removed, broadcasting: 1\nI0120 14:24:43.308791    2374 log.go:172] (0xc0009c6370) (0xc00064c140) Stream removed, broadcasting: 3\nI0120 14:24:43.308812    2374 log.go:172] (0xc0009c6370) (0xc000914000) Stream removed, broadcasting: 5\n"
Jan 20 14:24:43.316: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 14:24:43.316: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 14:24:53.361: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:24:53.361: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 14:24:53.361: INFO: Waiting for Pod statefulset-890/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 14:25:03.538: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:25:03.538: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 14:25:13.377: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
Jan 20 14:25:13.377: INFO: Waiting for Pod statefulset-890/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 14:25:23.382: INFO: Waiting for StatefulSet statefulset-890/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 20 14:25:33.376: INFO: Deleting all statefulset in ns statefulset-890
Jan 20 14:25:33.382: INFO: Scaling statefulset ss2 to 0
Jan 20 14:25:53.422: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 14:25:53.429: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:25:53.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-890" for this suite.
Jan 20 14:26:01.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:26:01.683: INFO: namespace statefulset-890 deletion completed in 8.206343097s

• [SLOW TEST:200.708 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:26:01.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 20 14:26:01.848: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 14:26:01.904: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 14:26:01.907: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 20 14:26:01.922: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 20 14:26:01.922: INFO: 	Container weave ready: true, restart count 0
Jan 20 14:26:01.922: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 14:26:01.922: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.922: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 14:26:01.922: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 20 14:26:01.931: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container etcd ready: true, restart count 0
Jan 20 14:26:01.931: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 20 14:26:01.931: INFO: 	Container weave ready: true, restart count 0
Jan 20 14:26:01.931: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 14:26:01.931: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container coredns ready: true, restart count 0
Jan 20 14:26:01.931: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 20 14:26:01.931: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 14:26:01.931: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 20 14:26:01.931: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 20 14:26:01.931: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 20 14:26:01.931: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-42273184-0453-43db-9add-35db39f2c61b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-42273184-0453-43db-9add-35db39f2c61b off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-42273184-0453-43db-9add-35db39f2c61b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:26:20.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3980" for this suite.
Jan 20 14:26:34.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:26:34.555: INFO: namespace sched-pred-3980 deletion completed in 14.191252688s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

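The NodeSelector check above labels a node and then schedules a pod that selects on that label. A minimal reproduction sketch; the label key/value and pod name here are illustrative, not the randomized ones from this run:

```shell
# Sketch only: label key/value, pod name, and pause image tag are assumptions.
kubectl label node iruya-node example-e2e-label=42

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example-e2e-label: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# Clean up: remove the label again, as the test's AfterEach does.
kubectl label node iruya-node example-e2e-label-
```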
• [SLOW TEST:32.872 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:26:34.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 20 14:26:34.701: INFO: Waiting up to 5m0s for pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7" in namespace "emptydir-1366" to be "success or failure"
Jan 20 14:26:34.709: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.892078ms
Jan 20 14:26:36.721: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019881055s
Jan 20 14:26:38.830: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129205216s
Jan 20 14:26:40.841: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139594885s
Jan 20 14:26:42.852: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151123014s
STEP: Saw pod success
Jan 20 14:26:42.853: INFO: Pod "pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7" satisfied condition "success or failure"
Jan 20 14:26:42.857: INFO: Trying to get logs from node iruya-node pod pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7 container test-container: 
STEP: delete the pod
Jan 20 14:26:42.919: INFO: Waiting for pod pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7 to disappear
Jan 20 14:26:42.933: INFO: Pod pod-cfb953f4-5b26-4aed-9a4f-6edde77dc5b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:26:42.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1366" for this suite.
Jan 20 14:26:48.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:26:49.095: INFO: namespace emptydir-1366 deletion completed in 6.155440319s

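The EmptyDir check runs a short-lived pod that mounts a default-medium emptyDir and inspects the mount's mode. A sketch of such a pod; the pod name, image, and mount path are illustrative:

```shell
# Sketch only: names, image, and path are assumptions, not the test's own.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mode of the emptyDir mount point, then exit.
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
```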
• [SLOW TEST:14.539 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:26:49.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:26:49.216: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 14:26:49.249: INFO: Number of nodes with available pods: 0
Jan 20 14:26:49.249: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:50.883: INFO: Number of nodes with available pods: 0
Jan 20 14:26:50.883: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:51.320: INFO: Number of nodes with available pods: 0
Jan 20 14:26:51.320: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:52.266: INFO: Number of nodes with available pods: 0
Jan 20 14:26:52.266: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:53.273: INFO: Number of nodes with available pods: 0
Jan 20 14:26:53.273: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:55.221: INFO: Number of nodes with available pods: 0
Jan 20 14:26:55.221: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:55.648: INFO: Number of nodes with available pods: 0
Jan 20 14:26:55.649: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:56.264: INFO: Number of nodes with available pods: 0
Jan 20 14:26:56.264: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:57.334: INFO: Number of nodes with available pods: 0
Jan 20 14:26:57.334: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:58.260: INFO: Number of nodes with available pods: 0
Jan 20 14:26:58.260: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:26:59.278: INFO: Number of nodes with available pods: 1
Jan 20 14:26:59.278: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:00.265: INFO: Number of nodes with available pods: 2
Jan 20 14:27:00.265: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 20 14:27:00.343: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:00.343: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:01.573: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:01.573: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:02.388: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:02.388: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:03.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:03.385: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:04.383: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:04.383: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:04.383: INFO: Pod daemon-set-vkq2h is not available
Jan 20 14:27:05.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:05.385: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:05.385: INFO: Pod daemon-set-vkq2h is not available
Jan 20 14:27:06.387: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:06.387: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:06.387: INFO: Pod daemon-set-vkq2h is not available
Jan 20 14:27:07.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:07.385: INFO: Wrong image for pod: daemon-set-vkq2h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:07.385: INFO: Pod daemon-set-vkq2h is not available
Jan 20 14:27:08.394: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:08.395: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:09.384: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:09.384: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:10.392: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:10.392: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:11.967: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:11.967: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:12.825: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:12.825: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:13.384: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:13.384: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:14.432: INFO: Pod daemon-set-7cmbc is not available
Jan 20 14:27:14.432: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:15.389: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:16.389: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:17.382: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:18.389: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:19.384: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:20.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:20.385: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:21.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:21.385: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:22.385: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:22.385: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:23.384: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:23.384: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:24.390: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:24.390: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:25.384: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:25.384: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:26.386: INFO: Wrong image for pod: daemon-set-hbgvc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 14:27:26.386: INFO: Pod daemon-set-hbgvc is not available
Jan 20 14:27:27.383: INFO: Pod daemon-set-mqqgc is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 20 14:27:27.401: INFO: Number of nodes with available pods: 1
Jan 20 14:27:27.401: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:28.423: INFO: Number of nodes with available pods: 1
Jan 20 14:27:28.424: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:29.422: INFO: Number of nodes with available pods: 1
Jan 20 14:27:29.422: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:30.412: INFO: Number of nodes with available pods: 1
Jan 20 14:27:30.412: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:31.418: INFO: Number of nodes with available pods: 1
Jan 20 14:27:31.418: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:32.416: INFO: Number of nodes with available pods: 1
Jan 20 14:27:32.416: INFO: Node iruya-node is running more than one daemon pod
Jan 20 14:27:33.417: INFO: Number of nodes with available pods: 2
Jan 20 14:27:33.417: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4941, will wait for the garbage collector to delete the pods
Jan 20 14:27:33.507: INFO: Deleting DaemonSet.extensions daemon-set took: 11.23718ms
Jan 20 14:27:33.908: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.609523ms
Jan 20 14:27:47.923: INFO: Number of nodes with available pods: 0
Jan 20 14:27:47.923: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 14:27:47.928: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4941/daemonsets","resourceVersion":"21191688"},"items":null}

Jan 20 14:27:47.931: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4941/pods","resourceVersion":"21191688"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:27:47.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4941" for this suite.
Jan 20 14:27:56.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:27:56.117: INFO: namespace daemonsets-4941 deletion completed in 8.167765045s

• [SLOW TEST:67.022 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
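The rolling update traced above (old `docker.io/library/nginx:1.14-alpine` pods drained and replaced by `gcr.io/kubernetes-e2e-test-images/redis:1.0`, one node at a time) corresponds to a DaemonSet roughly like the following sketch. The selector label and container name are assumptions; only the names, namespace, and images shown in the log are taken from it.

```yaml
# Hypothetical reconstruction of the DaemonSet under test.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4941
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label
  updateStrategy:
    type: RollingUpdate            # the strategy this spec exercises
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                  # assumed container name
        # updated from docker.io/library/nginx:1.14-alpine per the log
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `RollingUpdate`, patching `spec.template.spec.containers[0].image` triggers exactly the per-pod replacement loop visible in the "Wrong image for pod" lines above.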
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:27:56.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-875f32e2-03ab-46ef-a257-b1b3ac5f2a7e
STEP: Creating a pod to test consume configMaps
Jan 20 14:27:56.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d" in namespace "configmap-9856" to be "success or failure"
Jan 20 14:27:56.387: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.536061ms
Jan 20 14:27:58.396: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070435825s
Jan 20 14:28:00.409: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082489031s
Jan 20 14:28:02.430: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104098122s
Jan 20 14:28:04.438: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112295966s
Jan 20 14:28:06.465: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138505016s
STEP: Saw pod success
Jan 20 14:28:06.465: INFO: Pod "pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d" satisfied condition "success or failure"
Jan 20 14:28:06.469: INFO: Trying to get logs from node iruya-node pod pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d container configmap-volume-test: 
STEP: delete the pod
Jan 20 14:28:06.524: INFO: Waiting for pod pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d to disappear
Jan 20 14:28:06.538: INFO: Pod pod-configmaps-216345d8-e38c-4a99-a0df-600de5dad49d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:28:06.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9856" for this suite.
Jan 20 14:28:12.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:28:13.174: INFO: namespace configmap-9856 deletion completed in 6.628986212s

• [SLOW TEST:17.057 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
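The pod this test creates mounts the ConfigMap as a volume with explicit key-to-path mappings and runs as a non-root user; a minimal sketch follows. The key/path names, uid, and test image are assumptions; the ConfigMap name and container name come from the log.

```yaml
# Sketch of the test pod; only the overall shape follows the test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000              # the "as non-root" part of the check
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-875f32e2-03ab-46ef-a257-b1b3ac5f2a7e
      items:                     # "with mappings": keys remapped to paths
      - key: data-2              # assumed key
        path: path/to/data-2     # assumed path
```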
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:28:13.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 20 14:28:14.388: INFO: Pod name wrapped-volume-race-2c08a0ca-3a34-4262-9e83-7f38a38ab3bc: Found 0 pods out of 5
Jan 20 14:28:19.402: INFO: Pod name wrapped-volume-race-2c08a0ca-3a34-4262-9e83-7f38a38ab3bc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2c08a0ca-3a34-4262-9e83-7f38a38ab3bc in namespace emptydir-wrapper-3186, will wait for the garbage collector to delete the pods
Jan 20 14:28:49.513: INFO: Deleting ReplicationController wrapped-volume-race-2c08a0ca-3a34-4262-9e83-7f38a38ab3bc took: 12.271125ms
Jan 20 14:28:50.014: INFO: Terminating ReplicationController wrapped-volume-race-2c08a0ca-3a34-4262-9e83-7f38a38ab3bc pods took: 500.559795ms
STEP: Creating RC which spawns configmap-volume pods
Jan 20 14:29:37.698: INFO: Pod name wrapped-volume-race-7bd28402-07b4-4e0c-8f9a-6e81669d7e20: Found 0 pods out of 5
Jan 20 14:29:42.711: INFO: Pod name wrapped-volume-race-7bd28402-07b4-4e0c-8f9a-6e81669d7e20: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7bd28402-07b4-4e0c-8f9a-6e81669d7e20 in namespace emptydir-wrapper-3186, will wait for the garbage collector to delete the pods
Jan 20 14:30:18.804: INFO: Deleting ReplicationController wrapped-volume-race-7bd28402-07b4-4e0c-8f9a-6e81669d7e20 took: 12.960304ms
Jan 20 14:30:19.205: INFO: Terminating ReplicationController wrapped-volume-race-7bd28402-07b4-4e0c-8f9a-6e81669d7e20 pods took: 400.776562ms
STEP: Creating RC which spawns configmap-volume pods
Jan 20 14:31:06.659: INFO: Pod name wrapped-volume-race-c9b35489-1ece-4c5b-ac91-534f4c3f2f6f: Found 0 pods out of 5
Jan 20 14:31:11.674: INFO: Pod name wrapped-volume-race-c9b35489-1ece-4c5b-ac91-534f4c3f2f6f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c9b35489-1ece-4c5b-ac91-534f4c3f2f6f in namespace emptydir-wrapper-3186, will wait for the garbage collector to delete the pods
Jan 20 14:31:39.807: INFO: Deleting ReplicationController wrapped-volume-race-c9b35489-1ece-4c5b-ac91-534f4c3f2f6f took: 34.706742ms
Jan 20 14:31:40.208: INFO: Terminating ReplicationController wrapped-volume-race-c9b35489-1ece-4c5b-ac91-534f4c3f2f6f pods took: 400.64413ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:32:28.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3186" for this suite.
Jan 20 14:32:38.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:32:38.448: INFO: namespace emptydir-wrapper-3186 deletion completed in 10.188034872s

• [SLOW TEST:265.273 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
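The race being probed here comes from configMap volumes being wrapped in an emptyDir internally; the test therefore spawns ReplicationControllers whose pods mount many configMap volumes at once (50 per the log). A rough sketch of that workload, with names, label, and image as assumptions and only two of the fifty volumes shown:

```yaml
# Shape of the racy workload: one RC, 5 replicas, many configMap mounts.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race       # assumed label
  template:
    metadata:
      labels:
        name: wrapped-volume-race
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/pause:3.1  # any image would do; pause is assumed
        volumeMounts:
        - name: configmap-0
          mountPath: /etc/config-0
        - name: configmap-1
          mountPath: /etc/config-1
        # ... one mount per configMap, 50 in the actual test
      volumes:
      - name: configmap-0
        configMap:
          name: configmap-0          # one of the 50 pre-created configMaps
      - name: configmap-1
        configMap:
          name: configmap-1
```

Creating and garbage-collecting this RC three times in a row, as the log shows, is what shakes out the mount/unmount race.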
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:32:38.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9863
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 14:32:38.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 14:33:22.917: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9863 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:33:22.917: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:33:22.993300       8 log.go:172] (0xc001b56210) (0xc0007074a0) Create stream
I0120 14:33:22.993419       8 log.go:172] (0xc001b56210) (0xc0007074a0) Stream added, broadcasting: 1
I0120 14:33:22.999524       8 log.go:172] (0xc001b56210) Reply frame received for 1
I0120 14:33:22.999578       8 log.go:172] (0xc001b56210) (0xc000707540) Create stream
I0120 14:33:22.999586       8 log.go:172] (0xc001b56210) (0xc000707540) Stream added, broadcasting: 3
I0120 14:33:23.001097       8 log.go:172] (0xc001b56210) Reply frame received for 3
I0120 14:33:23.001121       8 log.go:172] (0xc001b56210) (0xc001116000) Create stream
I0120 14:33:23.001128       8 log.go:172] (0xc001b56210) (0xc001116000) Stream added, broadcasting: 5
I0120 14:33:23.002453       8 log.go:172] (0xc001b56210) Reply frame received for 5
I0120 14:33:23.171531       8 log.go:172] (0xc001b56210) Data frame received for 3
I0120 14:33:23.171634       8 log.go:172] (0xc000707540) (3) Data frame handling
I0120 14:33:23.171674       8 log.go:172] (0xc000707540) (3) Data frame sent
I0120 14:33:23.323240       8 log.go:172] (0xc001b56210) (0xc000707540) Stream removed, broadcasting: 3
I0120 14:33:23.323382       8 log.go:172] (0xc001b56210) Data frame received for 1
I0120 14:33:23.323417       8 log.go:172] (0xc001b56210) (0xc001116000) Stream removed, broadcasting: 5
I0120 14:33:23.323460       8 log.go:172] (0xc0007074a0) (1) Data frame handling
I0120 14:33:23.323500       8 log.go:172] (0xc0007074a0) (1) Data frame sent
I0120 14:33:23.323520       8 log.go:172] (0xc001b56210) (0xc0007074a0) Stream removed, broadcasting: 1
I0120 14:33:23.323548       8 log.go:172] (0xc001b56210) Go away received
I0120 14:33:23.323861       8 log.go:172] (0xc001b56210) (0xc0007074a0) Stream removed, broadcasting: 1
I0120 14:33:23.323883       8 log.go:172] (0xc001b56210) (0xc000707540) Stream removed, broadcasting: 3
I0120 14:33:23.323896       8 log.go:172] (0xc001b56210) (0xc001116000) Stream removed, broadcasting: 5
Jan 20 14:33:23.324: INFO: Waiting for endpoints: map[]
Jan 20 14:33:23.334: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9863 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 14:33:23.334: INFO: >>> kubeConfig: /root/.kube/config
I0120 14:33:23.390990       8 log.go:172] (0xc001c8a4d0) (0xc0018d4d20) Create stream
I0120 14:33:23.391101       8 log.go:172] (0xc001c8a4d0) (0xc0018d4d20) Stream added, broadcasting: 1
I0120 14:33:23.397420       8 log.go:172] (0xc001c8a4d0) Reply frame received for 1
I0120 14:33:23.397463       8 log.go:172] (0xc001c8a4d0) (0xc001116140) Create stream
I0120 14:33:23.397474       8 log.go:172] (0xc001c8a4d0) (0xc001116140) Stream added, broadcasting: 3
I0120 14:33:23.399425       8 log.go:172] (0xc001c8a4d0) Reply frame received for 3
I0120 14:33:23.399448       8 log.go:172] (0xc001c8a4d0) (0xc0011165a0) Create stream
I0120 14:33:23.399457       8 log.go:172] (0xc001c8a4d0) (0xc0011165a0) Stream added, broadcasting: 5
I0120 14:33:23.402759       8 log.go:172] (0xc001c8a4d0) Reply frame received for 5
I0120 14:33:23.525918       8 log.go:172] (0xc001c8a4d0) Data frame received for 3
I0120 14:33:23.525949       8 log.go:172] (0xc001116140) (3) Data frame handling
I0120 14:33:23.525962       8 log.go:172] (0xc001116140) (3) Data frame sent
I0120 14:33:23.688551       8 log.go:172] (0xc001c8a4d0) Data frame received for 1
I0120 14:33:23.688643       8 log.go:172] (0xc001c8a4d0) (0xc001116140) Stream removed, broadcasting: 3
I0120 14:33:23.688718       8 log.go:172] (0xc0018d4d20) (1) Data frame handling
I0120 14:33:23.688729       8 log.go:172] (0xc0018d4d20) (1) Data frame sent
I0120 14:33:23.688735       8 log.go:172] (0xc001c8a4d0) (0xc0018d4d20) Stream removed, broadcasting: 1
I0120 14:33:23.689148       8 log.go:172] (0xc001c8a4d0) (0xc0011165a0) Stream removed, broadcasting: 5
I0120 14:33:23.689176       8 log.go:172] (0xc001c8a4d0) Go away received
I0120 14:33:23.689313       8 log.go:172] (0xc001c8a4d0) (0xc0018d4d20) Stream removed, broadcasting: 1
I0120 14:33:23.689349       8 log.go:172] (0xc001c8a4d0) (0xc001116140) Stream removed, broadcasting: 3
I0120 14:33:23.689360       8 log.go:172] (0xc001c8a4d0) (0xc0011165a0) Stream removed, broadcasting: 5
Jan 20 14:33:23.689: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:33:23.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9863" for this suite.
Jan 20 14:33:47.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:33:47.922: INFO: namespace pod-network-test-9863 deletion completed in 24.217831251s

• [SLOW TEST:69.473 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
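The connectivity check itself is visible in the `ExecWithOptions` lines: a helper pod curls the netserver's `/dial` endpoint, which in turn tries to reach each peer pod and reports back. A one-shot pod issuing the same probe might look like the sketch below; the pod name and image are assumptions, while the URL is taken verbatim from the log.

```yaml
# Sketch of a one-shot probe pod replaying the /dial request from the log.
apiVersion: v1
kind: Pod
metadata:
  name: dial-probe            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: hostexec
    image: gcr.io/kubernetes-e2e-test-images/hostexec:1.1   # assumed image
    command: ["/bin/sh", "-c"]
    args:
    - >-
      curl -g -q -s
      'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'
```

An empty endpoints map in the response ("Waiting for endpoints: map[]") means every expected peer answered, which is why the test proceeds to teardown immediately after.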
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:33:47.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e3c1be58-7583-4be1-98fe-45ffb45cdb23
STEP: Creating a pod to test consume configMaps
Jan 20 14:33:48.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27" in namespace "projected-9837" to be "success or failure"
Jan 20 14:33:48.078: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822631ms
Jan 20 14:33:50.085: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010897813s
Jan 20 14:33:52.097: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023228998s
Jan 20 14:33:54.108: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033834152s
Jan 20 14:33:56.120: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046274916s
Jan 20 14:33:58.130: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056112057s
STEP: Saw pod success
Jan 20 14:33:58.131: INFO: Pod "pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27" satisfied condition "success or failure"
Jan 20 14:33:58.137: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 14:33:58.209: INFO: Waiting for pod pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27 to disappear
Jan 20 14:33:58.219: INFO: Pod pod-projected-configmaps-f205bd80-1929-45e8-89a6-fc01e6b5bd27 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:33:58.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9837" for this suite.
Jan 20 14:34:04.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:34:04.414: INFO: namespace projected-9837 deletion completed in 6.182871303s

• [SLOW TEST:16.491 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
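This is the projected-volume variant of the earlier ConfigMap check: the same mapping-as-non-root behavior, but with the configMap wired in through a `projected` volume source. A sketch, with the uid, image, and key/path mapping as assumptions and the ConfigMap and container names taken from the log:

```yaml
# Sketch of the projected-configMap test pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:                  # projected volumes merge several sources
      - configMap:
          name: projected-configmap-test-volume-map-e3c1be58-7583-4be1-98fe-45ffb45cdb23
          items:
          - key: data-2         # assumed key
            path: path/to/data-2
```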
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:34:04.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 20 14:34:04.528: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 14:34:04.542: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 14:34:04.547: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Jan 20 14:34:04.564: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.564: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 14:34:04.564: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 20 14:34:04.564: INFO: 	Container weave ready: true, restart count 0
Jan 20 14:34:04.564: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 14:34:04.564: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 20 14:34:04.612: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 20 14:34:04.612: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 20 14:34:04.612: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container coredns ready: true, restart count 0
Jan 20 14:34:04.612: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container etcd ready: true, restart count 0
Jan 20 14:34:04.612: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container weave ready: true, restart count 0
Jan 20 14:34:04.612: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 14:34:04.612: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container coredns ready: true, restart count 0
Jan 20 14:34:04.612: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 20 14:34:04.612: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 20 14:34:04.612: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 20 14:34:04.752: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 20 14:34:04.752: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c7bcf486-9afe-45de-b407-b594946c197c.15eb9eef4dcecbe1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3853/filler-pod-c7bcf486-9afe-45de-b407-b594946c197c to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c7bcf486-9afe-45de-b407-b594946c197c.15eb9ef088d59fdf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c7bcf486-9afe-45de-b407-b594946c197c.15eb9ef169f40527], Reason = [Created], Message = [Created container filler-pod-c7bcf486-9afe-45de-b407-b594946c197c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c7bcf486-9afe-45de-b407-b594946c197c.15eb9ef18edd6311], Reason = [Started], Message = [Started container filler-pod-c7bcf486-9afe-45de-b407-b594946c197c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2.15eb9eef4dced026], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3853/filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2.15eb9ef08578e0f0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2.15eb9ef13b42de06], Reason = [Created], Message = [Created container filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2.15eb9ef16296db91], Reason = [Started], Message = [Started container filler-pod-dd54c2a0-0b46-4f7d-a35c-2e832529eab2]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eb9ef21baf148e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:34:18.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3853" for this suite.
Jan 20 14:34:26.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:34:26.233: INFO: namespace sched-pred-3853 deletion completed in 8.093622127s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.820 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
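The mechanism above: filler pods are sized to consume most of each node's allocatable CPU, then one more pod is created whose request exceeds what remains, producing the `FailedScheduling` event `0/2 nodes are available: 2 Insufficient cpu.` A sketch of such an unschedulable pod follows; the 600m figure is illustrative, not taken from the log.

```yaml
# Sketch of the "additional-pod" that cannot fit anywhere.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 600m    # illustrative: more CPU than either node has free
      limits:
        cpu: 600m
```

Because scheduling is driven by requests (not actual usage), the pause image is enough: the pod never needs to burn CPU for the predicate to reject it.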
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:34:26.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7698
I0120 14:34:27.792766       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7698, replica count: 1
I0120 14:34:28.843274       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:29.843865       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:30.844236       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:31.845144       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:32.845798       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:33.846441       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:34.847105       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 14:34:35.847522       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 14:34:36.009: INFO: Created: latency-svc-l4ztn
Jan 20 14:34:36.085: INFO: Got endpoints: latency-svc-l4ztn [137.051695ms]
Jan 20 14:34:36.149: INFO: Created: latency-svc-mlq5d
Jan 20 14:34:36.167: INFO: Got endpoints: latency-svc-mlq5d [81.609168ms]
Jan 20 14:34:36.235: INFO: Created: latency-svc-t86r4
Jan 20 14:34:36.260: INFO: Got endpoints: latency-svc-t86r4 [174.709698ms]
Jan 20 14:34:36.334: INFO: Created: latency-svc-22sdp
Jan 20 14:34:36.443: INFO: Got endpoints: latency-svc-22sdp [358.272857ms]
Jan 20 14:34:36.469: INFO: Created: latency-svc-4l6th
Jan 20 14:34:36.493: INFO: Got endpoints: latency-svc-4l6th [407.88824ms]
Jan 20 14:34:36.535: INFO: Created: latency-svc-gdc25
Jan 20 14:34:36.606: INFO: Got endpoints: latency-svc-gdc25 [520.199176ms]
Jan 20 14:34:36.636: INFO: Created: latency-svc-t269w
Jan 20 14:34:36.662: INFO: Got endpoints: latency-svc-t269w [168.372583ms]
Jan 20 14:34:36.809: INFO: Created: latency-svc-c9bdb
Jan 20 14:34:36.810: INFO: Got endpoints: latency-svc-c9bdb [724.689366ms]
Jan 20 14:34:36.878: INFO: Created: latency-svc-8hcs8
Jan 20 14:34:36.983: INFO: Got endpoints: latency-svc-8hcs8 [897.002883ms]
Jan 20 14:34:37.049: INFO: Created: latency-svc-fxln9
Jan 20 14:34:37.064: INFO: Got endpoints: latency-svc-fxln9 [978.613196ms]
Jan 20 14:34:37.216: INFO: Created: latency-svc-s94gq
Jan 20 14:34:37.223: INFO: Got endpoints: latency-svc-s94gq [1.137228948s]
Jan 20 14:34:37.397: INFO: Created: latency-svc-zl745
Jan 20 14:34:37.421: INFO: Got endpoints: latency-svc-zl745 [1.336305448s]
Jan 20 14:34:37.576: INFO: Created: latency-svc-frrsm
Jan 20 14:34:37.644: INFO: Created: latency-svc-zflqc
Jan 20 14:34:37.645: INFO: Got endpoints: latency-svc-frrsm [1.559695649s]
Jan 20 14:34:37.650: INFO: Got endpoints: latency-svc-zflqc [1.564304989s]
Jan 20 14:34:37.902: INFO: Created: latency-svc-2fxqn
Jan 20 14:34:37.988: INFO: Got endpoints: latency-svc-2fxqn [1.90264439s]
Jan 20 14:34:38.032: INFO: Created: latency-svc-jsmbt
Jan 20 14:34:38.075: INFO: Got endpoints: latency-svc-jsmbt [1.989514032s]
Jan 20 14:34:38.214: INFO: Created: latency-svc-5n8cb
Jan 20 14:34:38.227: INFO: Got endpoints: latency-svc-5n8cb [2.141266622s]
Jan 20 14:34:38.265: INFO: Created: latency-svc-75pfl
Jan 20 14:34:38.273: INFO: Got endpoints: latency-svc-75pfl [2.105430508s]
Jan 20 14:34:38.302: INFO: Created: latency-svc-cvj4z
Jan 20 14:34:38.404: INFO: Got endpoints: latency-svc-cvj4z [2.143479783s]
Jan 20 14:34:38.434: INFO: Created: latency-svc-tbk2w
Jan 20 14:34:38.452: INFO: Got endpoints: latency-svc-tbk2w [2.008155643s]
Jan 20 14:34:38.504: INFO: Created: latency-svc-9vb98
Jan 20 14:34:38.504: INFO: Got endpoints: latency-svc-9vb98 [1.897820283s]
Jan 20 14:34:38.587: INFO: Created: latency-svc-w855n
Jan 20 14:34:38.593: INFO: Got endpoints: latency-svc-w855n [1.930521458s]
Jan 20 14:34:38.655: INFO: Created: latency-svc-59nfv
Jan 20 14:34:38.657: INFO: Got endpoints: latency-svc-59nfv [1.846627172s]
Jan 20 14:34:38.750: INFO: Created: latency-svc-mn78l
Jan 20 14:34:38.750: INFO: Got endpoints: latency-svc-mn78l [1.767122236s]
Jan 20 14:34:38.794: INFO: Created: latency-svc-trf95
Jan 20 14:34:38.801: INFO: Got endpoints: latency-svc-trf95 [1.736938808s]
Jan 20 14:34:38.955: INFO: Created: latency-svc-dldnc
Jan 20 14:34:38.980: INFO: Got endpoints: latency-svc-dldnc [1.756517488s]
Jan 20 14:34:39.139: INFO: Created: latency-svc-5x6h8
Jan 20 14:34:39.197: INFO: Got endpoints: latency-svc-5x6h8 [1.775531147s]
Jan 20 14:34:39.202: INFO: Created: latency-svc-wsf2x
Jan 20 14:34:39.264: INFO: Got endpoints: latency-svc-wsf2x [1.618556907s]
Jan 20 14:34:39.306: INFO: Created: latency-svc-qt9c7
Jan 20 14:34:39.336: INFO: Got endpoints: latency-svc-qt9c7 [1.685709179s]
Jan 20 14:34:39.422: INFO: Created: latency-svc-cppxw
Jan 20 14:34:39.450: INFO: Got endpoints: latency-svc-cppxw [1.460859245s]
Jan 20 14:34:39.500: INFO: Created: latency-svc-5cfzj
Jan 20 14:34:39.507: INFO: Got endpoints: latency-svc-5cfzj [1.431587208s]
Jan 20 14:34:39.666: INFO: Created: latency-svc-7gr24
Jan 20 14:34:39.675: INFO: Got endpoints: latency-svc-7gr24 [1.448551652s]
Jan 20 14:34:39.742: INFO: Created: latency-svc-h7dq2
Jan 20 14:34:39.831: INFO: Got endpoints: latency-svc-h7dq2 [1.558283983s]
Jan 20 14:34:39.873: INFO: Created: latency-svc-csfxl
Jan 20 14:34:39.880: INFO: Got endpoints: latency-svc-csfxl [1.475758791s]
Jan 20 14:34:40.008: INFO: Created: latency-svc-297s8
Jan 20 14:34:40.225: INFO: Created: latency-svc-zk6z6
Jan 20 14:34:40.225: INFO: Got endpoints: latency-svc-297s8 [1.773201442s]
Jan 20 14:34:40.246: INFO: Got endpoints: latency-svc-zk6z6 [1.742467164s]
Jan 20 14:34:40.463: INFO: Created: latency-svc-qxcck
Jan 20 14:34:40.547: INFO: Got endpoints: latency-svc-qxcck [1.954433255s]
Jan 20 14:34:40.555: INFO: Created: latency-svc-fsvdc
Jan 20 14:34:40.650: INFO: Got endpoints: latency-svc-fsvdc [1.992463447s]
Jan 20 14:34:40.723: INFO: Created: latency-svc-zxndf
Jan 20 14:34:40.730: INFO: Got endpoints: latency-svc-zxndf [1.979889269s]
Jan 20 14:34:40.907: INFO: Created: latency-svc-nvbj7
Jan 20 14:34:40.921: INFO: Got endpoints: latency-svc-nvbj7 [2.119832938s]
Jan 20 14:34:40.963: INFO: Created: latency-svc-gk7c7
Jan 20 14:34:41.071: INFO: Got endpoints: latency-svc-gk7c7 [2.091373899s]
Jan 20 14:34:41.121: INFO: Created: latency-svc-wvc9d
Jan 20 14:34:41.129: INFO: Got endpoints: latency-svc-wvc9d [1.931526325s]
Jan 20 14:34:41.246: INFO: Created: latency-svc-cwv7b
Jan 20 14:34:41.254: INFO: Got endpoints: latency-svc-cwv7b [1.990121353s]
Jan 20 14:34:41.334: INFO: Created: latency-svc-9cdr8
Jan 20 14:34:41.408: INFO: Got endpoints: latency-svc-9cdr8 [2.072164054s]
Jan 20 14:34:41.439: INFO: Created: latency-svc-w4gk5
Jan 20 14:34:41.449: INFO: Got endpoints: latency-svc-w4gk5 [1.998952066s]
Jan 20 14:34:41.603: INFO: Created: latency-svc-tccxl
Jan 20 14:34:41.623: INFO: Got endpoints: latency-svc-tccxl [2.115537499s]
Jan 20 14:34:41.660: INFO: Created: latency-svc-rwqvf
Jan 20 14:34:41.665: INFO: Got endpoints: latency-svc-rwqvf [1.989234725s]
Jan 20 14:34:41.759: INFO: Created: latency-svc-djhg9
Jan 20 14:34:41.784: INFO: Got endpoints: latency-svc-djhg9 [1.952282892s]
Jan 20 14:34:42.000: INFO: Created: latency-svc-47dtr
Jan 20 14:34:42.016: INFO: Got endpoints: latency-svc-47dtr [2.136184122s]
Jan 20 14:34:42.066: INFO: Created: latency-svc-kq79s
Jan 20 14:34:42.076: INFO: Got endpoints: latency-svc-kq79s [1.850196094s]
Jan 20 14:34:42.186: INFO: Created: latency-svc-jffw5
Jan 20 14:34:42.188: INFO: Got endpoints: latency-svc-jffw5 [1.941547877s]
Jan 20 14:34:42.236: INFO: Created: latency-svc-7bsd6
Jan 20 14:34:42.247: INFO: Got endpoints: latency-svc-7bsd6 [1.699416171s]
Jan 20 14:34:42.349: INFO: Created: latency-svc-bn8nc
Jan 20 14:34:42.354: INFO: Got endpoints: latency-svc-bn8nc [1.703881097s]
Jan 20 14:34:42.410: INFO: Created: latency-svc-4vtvd
Jan 20 14:34:42.529: INFO: Got endpoints: latency-svc-4vtvd [1.79835155s]
Jan 20 14:34:42.534: INFO: Created: latency-svc-85t2v
Jan 20 14:34:42.554: INFO: Got endpoints: latency-svc-85t2v [1.632176758s]
Jan 20 14:34:42.659: INFO: Created: latency-svc-gf424
Jan 20 14:34:42.671: INFO: Got endpoints: latency-svc-gf424 [1.59914197s]
Jan 20 14:34:42.746: INFO: Created: latency-svc-pcsdf
Jan 20 14:34:42.813: INFO: Got endpoints: latency-svc-pcsdf [1.683277258s]
Jan 20 14:34:42.915: INFO: Created: latency-svc-wfqpf
Jan 20 14:34:42.987: INFO: Got endpoints: latency-svc-wfqpf [1.732390295s]
Jan 20 14:34:43.025: INFO: Created: latency-svc-pb4dx
Jan 20 14:34:43.055: INFO: Got endpoints: latency-svc-pb4dx [1.64646441s]
Jan 20 14:34:43.061: INFO: Created: latency-svc-xl7w8
Jan 20 14:34:43.086: INFO: Got endpoints: latency-svc-xl7w8 [1.636970757s]
Jan 20 14:34:43.162: INFO: Created: latency-svc-rvbt9
Jan 20 14:34:43.171: INFO: Got endpoints: latency-svc-rvbt9 [1.547351675s]
Jan 20 14:34:43.247: INFO: Created: latency-svc-xdhr2
Jan 20 14:34:43.334: INFO: Got endpoints: latency-svc-xdhr2 [1.669497679s]
Jan 20 14:34:43.358: INFO: Created: latency-svc-cn97s
Jan 20 14:34:43.366: INFO: Got endpoints: latency-svc-cn97s [1.582074317s]
Jan 20 14:34:43.488: INFO: Created: latency-svc-gf45j
Jan 20 14:34:43.494: INFO: Got endpoints: latency-svc-gf45j [1.477437893s]
Jan 20 14:34:43.547: INFO: Created: latency-svc-fq7d6
Jan 20 14:34:43.550: INFO: Got endpoints: latency-svc-fq7d6 [1.474221758s]
Jan 20 14:34:43.684: INFO: Created: latency-svc-59gb6
Jan 20 14:34:43.684: INFO: Got endpoints: latency-svc-59gb6 [1.496229006s]
Jan 20 14:34:43.752: INFO: Created: latency-svc-8rs2c
Jan 20 14:34:43.759: INFO: Got endpoints: latency-svc-8rs2c [1.51158234s]
Jan 20 14:34:43.911: INFO: Created: latency-svc-c5dxq
Jan 20 14:34:43.919: INFO: Got endpoints: latency-svc-c5dxq [1.565424938s]
Jan 20 14:34:44.101: INFO: Created: latency-svc-kfk2n
Jan 20 14:34:44.116: INFO: Got endpoints: latency-svc-kfk2n [1.587036719s]
Jan 20 14:34:44.174: INFO: Created: latency-svc-l9mm5
Jan 20 14:34:44.187: INFO: Got endpoints: latency-svc-l9mm5 [1.633043935s]
Jan 20 14:34:44.316: INFO: Created: latency-svc-s4btw
Jan 20 14:34:44.329: INFO: Got endpoints: latency-svc-s4btw [1.658295155s]
Jan 20 14:34:44.436: INFO: Created: latency-svc-kxjph
Jan 20 14:34:44.451: INFO: Got endpoints: latency-svc-kxjph [1.638386667s]
Jan 20 14:34:44.504: INFO: Created: latency-svc-hz5wn
Jan 20 14:34:44.506: INFO: Got endpoints: latency-svc-hz5wn [1.519059904s]
Jan 20 14:34:44.639: INFO: Created: latency-svc-5wvgd
Jan 20 14:34:44.652: INFO: Got endpoints: latency-svc-5wvgd [1.597018434s]
Jan 20 14:34:44.694: INFO: Created: latency-svc-nb2ng
Jan 20 14:34:44.706: INFO: Got endpoints: latency-svc-nb2ng [1.619473302s]
Jan 20 14:34:44.855: INFO: Created: latency-svc-fxk6z
Jan 20 14:34:44.872: INFO: Got endpoints: latency-svc-fxk6z [1.701033982s]
Jan 20 14:34:44.993: INFO: Created: latency-svc-5ddvz
Jan 20 14:34:45.005: INFO: Got endpoints: latency-svc-5ddvz [1.670268397s]
Jan 20 14:34:45.135: INFO: Created: latency-svc-qgbfj
Jan 20 14:34:45.175: INFO: Got endpoints: latency-svc-qgbfj [1.80862574s]
Jan 20 14:34:45.194: INFO: Created: latency-svc-w4ksh
Jan 20 14:34:45.194: INFO: Got endpoints: latency-svc-w4ksh [1.699687711s]
Jan 20 14:34:45.316: INFO: Created: latency-svc-hqzkh
Jan 20 14:34:45.316: INFO: Got endpoints: latency-svc-hqzkh [1.766149991s]
Jan 20 14:34:45.372: INFO: Created: latency-svc-5rlh9
Jan 20 14:34:45.381: INFO: Got endpoints: latency-svc-5rlh9 [1.696618209s]
Jan 20 14:34:45.487: INFO: Created: latency-svc-cz6t8
Jan 20 14:34:45.489: INFO: Got endpoints: latency-svc-cz6t8 [1.729154578s]
Jan 20 14:34:45.656: INFO: Created: latency-svc-kdj6f
Jan 20 14:34:45.706: INFO: Got endpoints: latency-svc-kdj6f [1.786141107s]
Jan 20 14:34:45.714: INFO: Created: latency-svc-rgkjr
Jan 20 14:34:45.724: INFO: Got endpoints: latency-svc-rgkjr [1.607297814s]
Jan 20 14:34:45.877: INFO: Created: latency-svc-6tgnp
Jan 20 14:34:45.882: INFO: Got endpoints: latency-svc-6tgnp [1.694183109s]
Jan 20 14:34:45.934: INFO: Created: latency-svc-t8ttv
Jan 20 14:34:45.948: INFO: Got endpoints: latency-svc-t8ttv [1.618218527s]
Jan 20 14:34:46.071: INFO: Created: latency-svc-7drfw
Jan 20 14:34:46.075: INFO: Got endpoints: latency-svc-7drfw [1.623183788s]
Jan 20 14:34:46.126: INFO: Created: latency-svc-vlcrk
Jan 20 14:34:46.134: INFO: Got endpoints: latency-svc-vlcrk [1.627616474s]
Jan 20 14:34:46.262: INFO: Created: latency-svc-rwqcd
Jan 20 14:34:46.278: INFO: Got endpoints: latency-svc-rwqcd [1.625246828s]
Jan 20 14:34:46.328: INFO: Created: latency-svc-g8lmf
Jan 20 14:34:46.337: INFO: Got endpoints: latency-svc-g8lmf [1.631117442s]
Jan 20 14:34:46.449: INFO: Created: latency-svc-4qx6q
Jan 20 14:34:46.454: INFO: Got endpoints: latency-svc-4qx6q [1.582107098s]
Jan 20 14:34:46.535: INFO: Created: latency-svc-vtwsc
Jan 20 14:34:46.626: INFO: Got endpoints: latency-svc-vtwsc [1.621145762s]
Jan 20 14:34:46.678: INFO: Created: latency-svc-v54qq
Jan 20 14:34:46.699: INFO: Got endpoints: latency-svc-v54qq [1.523525374s]
Jan 20 14:34:46.801: INFO: Created: latency-svc-5w498
Jan 20 14:34:46.810: INFO: Got endpoints: latency-svc-5w498 [1.615173237s]
Jan 20 14:34:46.865: INFO: Created: latency-svc-k4vkk
Jan 20 14:34:46.883: INFO: Got endpoints: latency-svc-k4vkk [1.56676772s]
Jan 20 14:34:47.020: INFO: Created: latency-svc-9vk27
Jan 20 14:34:47.032: INFO: Got endpoints: latency-svc-9vk27 [1.650587328s]
Jan 20 14:34:47.102: INFO: Created: latency-svc-jbdrl
Jan 20 14:34:47.102: INFO: Got endpoints: latency-svc-jbdrl [1.613766708s]
Jan 20 14:34:47.259: INFO: Created: latency-svc-vd4mp
Jan 20 14:34:47.266: INFO: Got endpoints: latency-svc-vd4mp [1.560452783s]
Jan 20 14:34:47.305: INFO: Created: latency-svc-vksv5
Jan 20 14:34:47.316: INFO: Got endpoints: latency-svc-vksv5 [1.592254141s]
Jan 20 14:34:47.443: INFO: Created: latency-svc-9c6lz
Jan 20 14:34:47.474: INFO: Got endpoints: latency-svc-9c6lz [1.591895045s]
Jan 20 14:34:47.480: INFO: Created: latency-svc-lvh2d
Jan 20 14:34:47.489: INFO: Got endpoints: latency-svc-lvh2d [1.540784023s]
Jan 20 14:34:47.533: INFO: Created: latency-svc-gv8rs
Jan 20 14:34:47.685: INFO: Got endpoints: latency-svc-gv8rs [1.610539238s]
Jan 20 14:34:47.693: INFO: Created: latency-svc-nv6mx
Jan 20 14:34:47.702: INFO: Got endpoints: latency-svc-nv6mx [1.568179104s]
Jan 20 14:34:47.767: INFO: Created: latency-svc-8q7nc
Jan 20 14:34:47.893: INFO: Got endpoints: latency-svc-8q7nc [1.615597384s]
Jan 20 14:34:47.923: INFO: Created: latency-svc-9j2lz
Jan 20 14:34:48.107: INFO: Got endpoints: latency-svc-9j2lz [1.76964879s]
Jan 20 14:34:48.109: INFO: Created: latency-svc-crjd6
Jan 20 14:34:48.134: INFO: Got endpoints: latency-svc-crjd6 [1.679683881s]
Jan 20 14:34:48.166: INFO: Created: latency-svc-bd85f
Jan 20 14:34:48.281: INFO: Got endpoints: latency-svc-bd85f [1.653776643s]
Jan 20 14:34:48.285: INFO: Created: latency-svc-gjtbq
Jan 20 14:34:48.294: INFO: Got endpoints: latency-svc-gjtbq [1.595274124s]
Jan 20 14:34:48.349: INFO: Created: latency-svc-gt94m
Jan 20 14:34:48.349: INFO: Got endpoints: latency-svc-gt94m [1.538946566s]
Jan 20 14:34:48.378: INFO: Created: latency-svc-t7kf7
Jan 20 14:34:48.482: INFO: Got endpoints: latency-svc-t7kf7 [1.598215108s]
Jan 20 14:34:48.497: INFO: Created: latency-svc-vf7vz
Jan 20 14:34:48.517: INFO: Got endpoints: latency-svc-vf7vz [1.484583661s]
Jan 20 14:34:48.559: INFO: Created: latency-svc-7ctwl
Jan 20 14:34:48.668: INFO: Got endpoints: latency-svc-7ctwl [1.565886693s]
Jan 20 14:34:48.678: INFO: Created: latency-svc-6vjw8
Jan 20 14:34:48.681: INFO: Got endpoints: latency-svc-6vjw8 [1.413940022s]
Jan 20 14:34:48.750: INFO: Created: latency-svc-hk4nm
Jan 20 14:34:48.757: INFO: Got endpoints: latency-svc-hk4nm [1.440416521s]
Jan 20 14:34:48.883: INFO: Created: latency-svc-8d7tj
Jan 20 14:34:48.964: INFO: Got endpoints: latency-svc-8d7tj [1.490294456s]
Jan 20 14:34:49.144: INFO: Created: latency-svc-9j2hc
Jan 20 14:34:49.157: INFO: Got endpoints: latency-svc-9j2hc [1.667855239s]
Jan 20 14:34:49.237: INFO: Created: latency-svc-9c58n
Jan 20 14:34:49.238: INFO: Got endpoints: latency-svc-9c58n [1.552562031s]
Jan 20 14:34:49.320: INFO: Created: latency-svc-p4hjf
Jan 20 14:34:49.325: INFO: Got endpoints: latency-svc-p4hjf [1.622713794s]
Jan 20 14:34:49.367: INFO: Created: latency-svc-wvxjl
Jan 20 14:34:49.376: INFO: Got endpoints: latency-svc-wvxjl [1.482159988s]
Jan 20 14:34:49.518: INFO: Created: latency-svc-2rftk
Jan 20 14:34:49.523: INFO: Got endpoints: latency-svc-2rftk [1.416263993s]
Jan 20 14:34:49.602: INFO: Created: latency-svc-22fvn
Jan 20 14:34:49.711: INFO: Got endpoints: latency-svc-22fvn [1.575987527s]
Jan 20 14:34:49.741: INFO: Created: latency-svc-b8r5z
Jan 20 14:34:49.742: INFO: Got endpoints: latency-svc-b8r5z [1.46121116s]
Jan 20 14:34:49.810: INFO: Created: latency-svc-58rvs
Jan 20 14:34:49.926: INFO: Got endpoints: latency-svc-58rvs [1.631676031s]
Jan 20 14:34:49.957: INFO: Created: latency-svc-5qrm4
Jan 20 14:34:49.958: INFO: Got endpoints: latency-svc-5qrm4 [1.609239534s]
Jan 20 14:34:50.011: INFO: Created: latency-svc-4xhj5
Jan 20 14:34:50.017: INFO: Got endpoints: latency-svc-4xhj5 [1.535357503s]
Jan 20 14:34:50.105: INFO: Created: latency-svc-hcx8n
Jan 20 14:34:50.119: INFO: Got endpoints: latency-svc-hcx8n [1.602226456s]
Jan 20 14:34:50.147: INFO: Created: latency-svc-lvqfr
Jan 20 14:34:50.158: INFO: Got endpoints: latency-svc-lvqfr [1.489191843s]
Jan 20 14:34:50.297: INFO: Created: latency-svc-7r4ns
Jan 20 14:34:50.316: INFO: Got endpoints: latency-svc-7r4ns [1.634960995s]
Jan 20 14:34:50.350: INFO: Created: latency-svc-vkdnx
Jan 20 14:34:50.361: INFO: Got endpoints: latency-svc-vkdnx [1.60421898s]
Jan 20 14:34:50.474: INFO: Created: latency-svc-kwj9j
Jan 20 14:34:50.475: INFO: Got endpoints: latency-svc-kwj9j [1.510299987s]
Jan 20 14:34:50.550: INFO: Created: latency-svc-7kjgw
Jan 20 14:34:50.657: INFO: Got endpoints: latency-svc-7kjgw [1.499366117s]
Jan 20 14:34:50.687: INFO: Created: latency-svc-x5wtc
Jan 20 14:34:50.697: INFO: Got endpoints: latency-svc-x5wtc [1.458268328s]
Jan 20 14:34:50.853: INFO: Created: latency-svc-868k4
Jan 20 14:34:50.878: INFO: Got endpoints: latency-svc-868k4 [1.552545945s]
Jan 20 14:34:51.185: INFO: Created: latency-svc-j5wp5
Jan 20 14:34:51.197: INFO: Got endpoints: latency-svc-j5wp5 [1.820094596s]
Jan 20 14:34:52.200: INFO: Created: latency-svc-v8zzg
Jan 20 14:34:52.210: INFO: Got endpoints: latency-svc-v8zzg [2.686810308s]
Jan 20 14:34:52.388: INFO: Created: latency-svc-tbg6f
Jan 20 14:34:52.441: INFO: Created: latency-svc-5rtjs
Jan 20 14:34:52.442: INFO: Got endpoints: latency-svc-tbg6f [2.731113318s]
Jan 20 14:34:52.453: INFO: Got endpoints: latency-svc-5rtjs [2.711030978s]
Jan 20 14:34:52.576: INFO: Created: latency-svc-26jjc
Jan 20 14:34:52.604: INFO: Got endpoints: latency-svc-26jjc [2.677132909s]
Jan 20 14:34:52.825: INFO: Created: latency-svc-qrsfn
Jan 20 14:34:53.010: INFO: Created: latency-svc-2hhv5
Jan 20 14:34:53.011: INFO: Got endpoints: latency-svc-qrsfn [3.052407101s]
Jan 20 14:34:53.056: INFO: Got endpoints: latency-svc-2hhv5 [3.038423s]
Jan 20 14:34:53.078: INFO: Created: latency-svc-99sjv
Jan 20 14:34:53.174: INFO: Got endpoints: latency-svc-99sjv [3.054809455s]
Jan 20 14:34:53.207: INFO: Created: latency-svc-pfphl
Jan 20 14:34:53.218: INFO: Got endpoints: latency-svc-pfphl [3.060097707s]
Jan 20 14:34:53.262: INFO: Created: latency-svc-gkqrt
Jan 20 14:34:53.380: INFO: Got endpoints: latency-svc-gkqrt [3.063594328s]
Jan 20 14:34:53.390: INFO: Created: latency-svc-prxdx
Jan 20 14:34:53.394: INFO: Got endpoints: latency-svc-prxdx [3.033077822s]
Jan 20 14:34:53.465: INFO: Created: latency-svc-zppc5
Jan 20 14:34:53.471: INFO: Got endpoints: latency-svc-zppc5 [2.995792785s]
Jan 20 14:34:53.606: INFO: Created: latency-svc-4wght
Jan 20 14:34:53.652: INFO: Got endpoints: latency-svc-4wght [2.994382748s]
Jan 20 14:34:53.654: INFO: Created: latency-svc-c25c2
Jan 20 14:34:53.671: INFO: Got endpoints: latency-svc-c25c2 [2.974410914s]
Jan 20 14:34:53.800: INFO: Created: latency-svc-7zp9l
Jan 20 14:34:53.802: INFO: Got endpoints: latency-svc-7zp9l [2.924154539s]
Jan 20 14:34:53.870: INFO: Created: latency-svc-cgxsm
Jan 20 14:34:53.988: INFO: Got endpoints: latency-svc-cgxsm [2.790567679s]
Jan 20 14:34:53.998: INFO: Created: latency-svc-vvhc9
Jan 20 14:34:54.007: INFO: Got endpoints: latency-svc-vvhc9 [1.796577276s]
Jan 20 14:34:54.058: INFO: Created: latency-svc-ns8rq
Jan 20 14:34:54.201: INFO: Got endpoints: latency-svc-ns8rq [1.758571783s]
Jan 20 14:34:54.212: INFO: Created: latency-svc-kg4xx
Jan 20 14:34:54.216: INFO: Got endpoints: latency-svc-kg4xx [1.76267933s]
Jan 20 14:34:54.279: INFO: Created: latency-svc-82scj
Jan 20 14:34:54.402: INFO: Got endpoints: latency-svc-82scj [1.798411788s]
Jan 20 14:34:54.403: INFO: Created: latency-svc-qccfk
Jan 20 14:34:54.415: INFO: Got endpoints: latency-svc-qccfk [1.403871815s]
Jan 20 14:34:54.475: INFO: Created: latency-svc-q989w
Jan 20 14:34:54.570: INFO: Created: latency-svc-44lw7
Jan 20 14:34:54.570: INFO: Got endpoints: latency-svc-q989w [1.514068191s]
Jan 20 14:34:54.578: INFO: Got endpoints: latency-svc-44lw7 [1.404208994s]
Jan 20 14:34:54.634: INFO: Created: latency-svc-kgqzq
Jan 20 14:34:54.644: INFO: Got endpoints: latency-svc-kgqzq [1.425799468s]
Jan 20 14:34:54.662: INFO: Created: latency-svc-h89wf
Jan 20 14:34:54.722: INFO: Got endpoints: latency-svc-h89wf [1.342064604s]
Jan 20 14:34:54.746: INFO: Created: latency-svc-fbgcb
Jan 20 14:34:54.772: INFO: Got endpoints: latency-svc-fbgcb [1.377536553s]
Jan 20 14:34:54.942: INFO: Created: latency-svc-wxzx6
Jan 20 14:34:54.948: INFO: Got endpoints: latency-svc-wxzx6 [1.477412136s]
Jan 20 14:34:54.993: INFO: Created: latency-svc-25vcw
Jan 20 14:34:54.995: INFO: Got endpoints: latency-svc-25vcw [1.342877901s]
Jan 20 14:34:55.024: INFO: Created: latency-svc-pw8pd
Jan 20 14:34:55.139: INFO: Got endpoints: latency-svc-pw8pd [1.467264828s]
Jan 20 14:34:55.145: INFO: Created: latency-svc-h65tr
Jan 20 14:34:55.148: INFO: Got endpoints: latency-svc-h65tr [1.344976802s]
Jan 20 14:34:55.195: INFO: Created: latency-svc-rxmfz
Jan 20 14:34:55.207: INFO: Got endpoints: latency-svc-rxmfz [1.218767433s]
Jan 20 14:34:55.305: INFO: Created: latency-svc-zkqmk
Jan 20 14:34:55.314: INFO: Got endpoints: latency-svc-zkqmk [1.306234012s]
Jan 20 14:34:55.364: INFO: Created: latency-svc-8j4xj
Jan 20 14:34:55.373: INFO: Got endpoints: latency-svc-8j4xj [1.171781271s]
Jan 20 14:34:55.476: INFO: Created: latency-svc-9mdc2
Jan 20 14:34:55.480: INFO: Got endpoints: latency-svc-9mdc2 [1.264176111s]
Jan 20 14:34:55.533: INFO: Created: latency-svc-xpxnf
Jan 20 14:34:55.535: INFO: Got endpoints: latency-svc-xpxnf [1.132802005s]
Jan 20 14:34:55.646: INFO: Created: latency-svc-tb6bq
Jan 20 14:34:55.646: INFO: Got endpoints: latency-svc-tb6bq [1.231409281s]
Jan 20 14:34:55.697: INFO: Created: latency-svc-vddkg
Jan 20 14:34:55.718: INFO: Got endpoints: latency-svc-vddkg [1.147504483s]
Jan 20 14:34:55.724: INFO: Created: latency-svc-wdg2g
Jan 20 14:34:55.925: INFO: Got endpoints: latency-svc-wdg2g [1.34613732s]
Jan 20 14:34:55.947: INFO: Created: latency-svc-ltgqj
Jan 20 14:34:55.950: INFO: Got endpoints: latency-svc-ltgqj [1.305451233s]
Jan 20 14:34:56.123: INFO: Created: latency-svc-g79ss
Jan 20 14:34:56.131: INFO: Got endpoints: latency-svc-g79ss [1.40890445s]
Jan 20 14:34:56.173: INFO: Created: latency-svc-2j6wp
Jan 20 14:34:56.181: INFO: Got endpoints: latency-svc-2j6wp [1.408629686s]
Jan 20 14:34:56.324: INFO: Created: latency-svc-zpx5d
Jan 20 14:34:56.325: INFO: Got endpoints: latency-svc-zpx5d [1.376097462s]
Jan 20 14:34:56.371: INFO: Created: latency-svc-hpzsf
Jan 20 14:34:56.391: INFO: Got endpoints: latency-svc-hpzsf [1.396275013s]
Jan 20 14:34:56.483: INFO: Created: latency-svc-z66hh
Jan 20 14:34:56.531: INFO: Created: latency-svc-bkm2k
Jan 20 14:34:56.536: INFO: Got endpoints: latency-svc-z66hh [1.397323727s]
Jan 20 14:34:56.541: INFO: Got endpoints: latency-svc-bkm2k [1.393552632s]
Jan 20 14:34:56.577: INFO: Created: latency-svc-d6p72
Jan 20 14:34:56.660: INFO: Got endpoints: latency-svc-d6p72 [1.453084511s]
Jan 20 14:34:56.675: INFO: Created: latency-svc-cnk6w
Jan 20 14:34:56.683: INFO: Got endpoints: latency-svc-cnk6w [1.368759483s]
Jan 20 14:34:56.723: INFO: Created: latency-svc-n7rwj
Jan 20 14:34:56.729: INFO: Got endpoints: latency-svc-n7rwj [1.355960826s]
Jan 20 14:34:56.767: INFO: Created: latency-svc-kk7bj
Jan 20 14:34:56.876: INFO: Got endpoints: latency-svc-kk7bj [1.395819198s]
Jan 20 14:34:56.891: INFO: Created: latency-svc-5cng6
Jan 20 14:34:56.926: INFO: Got endpoints: latency-svc-5cng6 [1.391000773s]
Jan 20 14:34:56.939: INFO: Created: latency-svc-9g9t5
Jan 20 14:34:56.939: INFO: Got endpoints: latency-svc-9g9t5 [1.292891623s]
Jan 20 14:34:57.056: INFO: Created: latency-svc-lq6mh
Jan 20 14:34:57.069: INFO: Created: latency-svc-8jrpd
Jan 20 14:34:57.082: INFO: Got endpoints: latency-svc-8jrpd [1.15718948s]
Jan 20 14:34:57.082: INFO: Got endpoints: latency-svc-lq6mh [1.364210069s]
Jan 20 14:34:57.104: INFO: Created: latency-svc-nf7kr
Jan 20 14:34:57.117: INFO: Got endpoints: latency-svc-nf7kr [1.167636296s]
Jan 20 14:34:57.299: INFO: Created: latency-svc-w9qw4
Jan 20 14:34:57.310: INFO: Got endpoints: latency-svc-w9qw4 [1.178507575s]
Jan 20 14:34:57.345: INFO: Created: latency-svc-9t5zr
Jan 20 14:34:57.388: INFO: Got endpoints: latency-svc-9t5zr [1.207188568s]
Jan 20 14:34:57.479: INFO: Created: latency-svc-tbg6s
Jan 20 14:34:57.506: INFO: Created: latency-svc-qbbnq
Jan 20 14:34:57.507: INFO: Got endpoints: latency-svc-tbg6s [1.182404901s]
Jan 20 14:34:57.511: INFO: Got endpoints: latency-svc-qbbnq [1.119649444s]
Jan 20 14:34:57.552: INFO: Created: latency-svc-dlfhw
Jan 20 14:34:57.627: INFO: Got endpoints: latency-svc-dlfhw [1.090026248s]
Jan 20 14:34:57.650: INFO: Created: latency-svc-cn9dp
Jan 20 14:34:57.661: INFO: Got endpoints: latency-svc-cn9dp [1.119920366s]
Jan 20 14:34:57.711: INFO: Created: latency-svc-czmld
Jan 20 14:34:57.712: INFO: Got endpoints: latency-svc-czmld [1.051293191s]
Jan 20 14:34:57.825: INFO: Created: latency-svc-rqsb8
Jan 20 14:34:57.829: INFO: Got endpoints: latency-svc-rqsb8 [1.145809888s]
Jan 20 14:34:57.997: INFO: Created: latency-svc-s424g
Jan 20 14:34:58.006: INFO: Got endpoints: latency-svc-s424g [1.277043771s]
Jan 20 14:34:58.049: INFO: Created: latency-svc-wt9g6
Jan 20 14:34:58.054: INFO: Got endpoints: latency-svc-wt9g6 [1.177026816s]
Jan 20 14:34:58.209: INFO: Created: latency-svc-4jqbk
Jan 20 14:34:58.233: INFO: Got endpoints: latency-svc-4jqbk [1.306108544s]
Jan 20 14:34:58.238: INFO: Created: latency-svc-bsj5k
Jan 20 14:34:58.246: INFO: Got endpoints: latency-svc-bsj5k [1.307048214s]
Jan 20 14:34:58.292: INFO: Created: latency-svc-mxstl
Jan 20 14:34:58.292: INFO: Got endpoints: latency-svc-mxstl [1.209625042s]
Jan 20 14:34:58.452: INFO: Created: latency-svc-qclj6
Jan 20 14:34:58.458: INFO: Got endpoints: latency-svc-qclj6 [1.374987089s]
Jan 20 14:34:58.458: INFO: Latencies: [81.609168ms 168.372583ms 174.709698ms 358.272857ms 407.88824ms 520.199176ms 724.689366ms 897.002883ms 978.613196ms 1.051293191s 1.090026248s 1.119649444s 1.119920366s 1.132802005s 1.137228948s 1.145809888s 1.147504483s 1.15718948s 1.167636296s 1.171781271s 1.177026816s 1.178507575s 1.182404901s 1.207188568s 1.209625042s 1.218767433s 1.231409281s 1.264176111s 1.277043771s 1.292891623s 1.305451233s 1.306108544s 1.306234012s 1.307048214s 1.336305448s 1.342064604s 1.342877901s 1.344976802s 1.34613732s 1.355960826s 1.364210069s 1.368759483s 1.374987089s 1.376097462s 1.377536553s 1.391000773s 1.393552632s 1.395819198s 1.396275013s 1.397323727s 1.403871815s 1.404208994s 1.408629686s 1.40890445s 1.413940022s 1.416263993s 1.425799468s 1.431587208s 1.440416521s 1.448551652s 1.453084511s 1.458268328s 1.460859245s 1.46121116s 1.467264828s 1.474221758s 1.475758791s 1.477412136s 1.477437893s 1.482159988s 1.484583661s 1.489191843s 1.490294456s 1.496229006s 1.499366117s 1.510299987s 1.51158234s 1.514068191s 1.519059904s 1.523525374s 1.535357503s 1.538946566s 1.540784023s 1.547351675s 1.552545945s 1.552562031s 1.558283983s 1.559695649s 1.560452783s 1.564304989s 1.565424938s 1.565886693s 1.56676772s 1.568179104s 1.575987527s 1.582074317s 1.582107098s 1.587036719s 1.591895045s 1.592254141s 1.595274124s 1.597018434s 1.598215108s 1.59914197s 1.602226456s 1.60421898s 1.607297814s 1.609239534s 1.610539238s 1.613766708s 1.615173237s 1.615597384s 1.618218527s 1.618556907s 1.619473302s 1.621145762s 1.622713794s 1.623183788s 1.625246828s 1.627616474s 1.631117442s 1.631676031s 1.632176758s 1.633043935s 1.634960995s 1.636970757s 1.638386667s 1.64646441s 1.650587328s 1.653776643s 1.658295155s 1.667855239s 1.669497679s 1.670268397s 1.679683881s 1.683277258s 1.685709179s 1.694183109s 1.696618209s 1.699416171s 1.699687711s 1.701033982s 1.703881097s 1.729154578s 1.732390295s 1.736938808s 1.742467164s 1.756517488s 1.758571783s 1.76267933s 1.766149991s 1.767122236s 1.76964879s 1.773201442s 1.775531147s 1.786141107s 1.796577276s 1.79835155s 1.798411788s 1.80862574s 1.820094596s 1.846627172s 1.850196094s 1.897820283s 1.90264439s 1.930521458s 1.931526325s 1.941547877s 1.952282892s 1.954433255s 1.979889269s 1.989234725s 1.989514032s 1.990121353s 1.992463447s 1.998952066s 2.008155643s 2.072164054s 2.091373899s 2.105430508s 2.115537499s 2.119832938s 2.136184122s 2.141266622s 2.143479783s 2.677132909s 2.686810308s 2.711030978s 2.731113318s 2.790567679s 2.924154539s 2.974410914s 2.994382748s 2.995792785s 3.033077822s 3.038423s 3.052407101s 3.054809455s 3.060097707s 3.063594328s]
Jan 20 14:34:58.458: INFO: 50 %ile: 1.595274124s
Jan 20 14:34:58.458: INFO: 90 %ile: 2.115537499s
Jan 20 14:34:58.458: INFO: 99 %ile: 3.060097707s
Jan 20 14:34:58.458: INFO: Total sample count: 200
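The 50/90/99 %ile lines above are taken from the sorted latency sample list printed just before them. A minimal sketch of that computation using the nearest-rank method (the helper name is illustrative, not the e2e framework's actual code):

```python
import math

def percentile(sorted_samples, p):
    # Nearest-rank percentile: rank = ceil(p * n / 100), minus 1 for 0-based indexing.
    # Assumes sorted_samples is already sorted ascending, as in the log's Latencies list.
    idx = math.ceil(p * len(sorted_samples) / 100) - 1
    return sorted_samples[max(idx, 0)]
```

With the 200 samples in the log, p=50 selects the 100th smallest latency, which is why the reported 50 %ile appears verbatim in the sample list.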
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:34:58.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7698" for this suite.
Jan 20 14:35:54.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:35:54.880: INFO: namespace svc-latency-7698 deletion completed in 56.41110166s

• [SLOW TEST:88.646 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:35:54.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5951
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5951
STEP: Creating statefulset with conflicting port in namespace statefulset-5951
STEP: Waiting until pod test-pod starts running in namespace statefulset-5951
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5951
Jan 20 14:36:03.148: INFO: Observed stateful pod in namespace: statefulset-5951, name: ss-0, uid: 3d793b67-2596-4a3f-8347-67d56aabe67e, status phase: Pending. Waiting for the statefulset controller to delete it.
Jan 20 14:36:06.491: INFO: Observed stateful pod in namespace: statefulset-5951, name: ss-0, uid: 3d793b67-2596-4a3f-8347-67d56aabe67e, status phase: Failed. Waiting for the statefulset controller to delete it.
Jan 20 14:36:06.514: INFO: Observed stateful pod in namespace: statefulset-5951, name: ss-0, uid: 3d793b67-2596-4a3f-8347-67d56aabe67e, status phase: Failed. Waiting for the statefulset controller to delete it.
Jan 20 14:36:06.546: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5951
STEP: Removing pod with conflicting port in namespace statefulset-5951
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5951 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 20 14:36:16.766: INFO: Deleting all statefulset in ns statefulset-5951
Jan 20 14:36:16.772: INFO: Scaling statefulset ss to 0
Jan 20 14:36:26.817: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 14:36:26.828: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:36:26.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5951" for this suite.
Jan 20 14:36:33.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:36:33.165: INFO: namespace statefulset-5951 deletion completed in 6.184180674s

• [SLOW TEST:38.284 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
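The manifests this spec creates are not shown in the log; a minimal sketch of the conflicting-host-port scenario (the node name, labels, and port number are illustrative assumptions, not values from the test) could look like:

```yaml
# Plain pod that claims a host port on the chosen node (illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: chosen-node          # pinned to the same node as the stateful pod
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017            # conflicts with the StatefulSet below
---
# StatefulSet whose pod template requests the same host port, so ss-0
# cannot schedule successfully until test-pod is removed.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: chosen-node
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017
```

Once the plain pod is deleted, the statefulset controller recreates ss-0 and it reaches Running, which is exactly the sequence the log above records.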
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:36:33.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-96523365-226d-4870-883e-e1c983b5fe23
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-96523365-226d-4870-883e-e1c983b5fe23
STEP: Waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:36:47.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6946" for this suite.
Jan 20 14:37:09.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:37:09.704: INFO: namespace configmap-6946 deletion completed in 22.17901636s

• [SLOW TEST:36.539 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
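The ConfigMap and pod themselves are not printed in the log; a minimal sketch of the pattern being exercised (names and the polling command are illustrative, not taken from the test) might be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd       # the test appends a generated UUID suffix
data:
  data-1: value-1                # updating this key should show up in the volume
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.29          # assumed image; the log does not show it
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd   # kubelet syncs ConfigMap updates into the mount
```

The ~14 s gap between "Updating configmap" and teardown reflects the kubelet's periodic sync of the mounted volume, which is what the spec waits to observe.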
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:37:09.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 14:37:09.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2363'
Jan 20 14:37:11.943: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 14:37:11.944: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 20 14:37:13.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2363'
Jan 20 14:37:14.112: INFO: stderr: ""
Jan 20 14:37:14.113: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:37:14.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2363" for this suite.
Jan 20 14:37:20.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:37:20.336: INFO: namespace kubectl-2363 deletion completed in 6.209091856s

• [SLOW TEST:10.630 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
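As the captured stderr notes, `kubectl run` with the `deployment/apps.v1` generator was deprecated at this version. The object it generates is roughly equivalent to the following Deployment (a sketch of the generated resource, not output captured from the run):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

On later releases the same result is obtained with `kubectl create deployment`, which is what the deprecation warning points at.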
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:37:20.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 14:37:20.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8" in namespace "downward-api-2646" to be "success or failure"
Jan 20 14:37:20.523: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217423ms
Jan 20 14:37:22.551: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031776752s
Jan 20 14:37:24.561: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042260182s
Jan 20 14:37:26.573: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05414005s
Jan 20 14:37:28.587: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068379516s
Jan 20 14:37:30.607: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088168635s
STEP: Saw pod success
Jan 20 14:37:30.607: INFO: Pod "downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8" satisfied condition "success or failure"
Jan 20 14:37:30.615: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8 container client-container: 
STEP: delete the pod
Jan 20 14:37:30.688: INFO: Waiting for pod downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8 to disappear
Jan 20 14:37:30.696: INFO: Pod downwardapi-volume-d22a9671-a0d0-43f8-a3a2-eefe87a54ff8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:37:30.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2646" for this suite.
Jan 20 14:37:36.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:37:36.942: INFO: namespace downward-api-2646 deletion completed in 6.240219657s

• [SLOW TEST:16.606 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
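The pod the spec creates is not shown; a minimal downward API volume pod that exposes only the pod name (image and command are assumptions, the volume wiring is the documented mechanism) would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test   # the test appends a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29           # assumed; the log does not show the image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # pod name projected into the volume file
```

The spec passes when the container exits 0 after printing its own name, which is why the log polls the pod until Phase="Succeeded".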
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:37:36.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:37:37.040: INFO: Creating ReplicaSet my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170
Jan 20 14:37:37.107: INFO: Pod name my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170: Found 0 pods out of 1
Jan 20 14:37:42.115: INFO: Pod name my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170: Found 1 pods out of 1
Jan 20 14:37:42.115: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170" is running
Jan 20 14:37:46.180: INFO: Pod "my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170-d8qkf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 14:37:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 14:37:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 14:37:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 14:37:37 +0000 UTC Reason: Message:}])
Jan 20 14:37:46.180: INFO: Trying to dial the pod
Jan 20 14:37:51.220: INFO: Controller my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170: Got expected result from replica 1 [my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170-d8qkf]: "my-hostname-basic-b13ee225-6ce2-4e76-af76-c938bf87a170-d8qkf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:37:51.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9722" for this suite.
Jan 20 14:37:57.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:37:57.402: INFO: namespace replicaset-9722 deletion completed in 6.173686038s

• [SLOW TEST:20.459 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
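A sketch of the ReplicaSet being created (the image and port are assumptions based on the "serve hostname" behavior the spec verifies; the real name carries a generated UUID):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # test appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
        ports:
        - containerPort: 9376      # assumed port
```

Each replica serves its own pod name over HTTP; the "Got expected result from replica 1" line is the test dialing the pod and matching the response against the pod name.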
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:37:57.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 20 14:38:13.731: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:13.745: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:15.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:15.760: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:17.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:17.759: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:19.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:19.759: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:21.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:21.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:23.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:23.757: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:25.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:25.758: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:27.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:27.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:29.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:29.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:31.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:31.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:33.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:33.758: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:35.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:35.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:37.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:37.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:39.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:39.759: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:41.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:41.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:43.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:43.755: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:45.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:45.754: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 14:38:47.746: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 14:38:47.754: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:38:47.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1309" for this suite.
Jan 20 14:39:09.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:39:09.964: INFO: namespace container-lifecycle-hook-1309 deletion completed in 22.170394423s

• [SLOW TEST:72.562 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
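The hook pod's manifest is not in the log; a minimal sketch of a preStop exec hook that reports back to the handler pod created in BeforeEach (image, command, and the handler address are illustrative assumptions) could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox:1.29            # assumed; the log does not show the image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # On deletion, notify the HTTPGet handler pod before the
          # container is stopped (handler address is hypothetical).
          command: ["sh", "-c", "wget -qO- http://handler-pod-ip:8080/echo?msg=prestop"]
```

The long run of "still exists" lines above is the expected grace-period window: the pod lingers while the preStop hook runs and the container shuts down, after which the test checks that the handler saw the hook request.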
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:39:09.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 20 14:39:10.048: INFO: namespace kubectl-5649
Jan 20 14:39:10.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5649'
Jan 20 14:39:10.528: INFO: stderr: ""
Jan 20 14:39:10.528: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 20 14:39:11.546: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:11.546: INFO: Found 0 / 1
Jan 20 14:39:12.552: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:12.552: INFO: Found 0 / 1
Jan 20 14:39:13.542: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:13.542: INFO: Found 0 / 1
Jan 20 14:39:14.540: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:14.541: INFO: Found 0 / 1
Jan 20 14:39:15.542: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:15.542: INFO: Found 0 / 1
Jan 20 14:39:16.551: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:16.552: INFO: Found 0 / 1
Jan 20 14:39:17.536: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:17.536: INFO: Found 0 / 1
Jan 20 14:39:18.544: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:18.544: INFO: Found 1 / 1
Jan 20 14:39:18.544: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 20 14:39:18.553: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:39:18.553: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 20 14:39:18.553: INFO: wait on redis-master startup in kubectl-5649 
Jan 20 14:39:18.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jnmvq redis-master --namespace=kubectl-5649'
Jan 20 14:39:18.726: INFO: stderr: ""
Jan 20 14:39:18.726: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Jan 14:39:17.625 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 14:39:17.625 # Server started, Redis version 3.2.12\n1:M 20 Jan 14:39:17.626 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Jan 14:39:17.626 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 20 14:39:18.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5649'
Jan 20 14:39:18.890: INFO: stderr: ""
Jan 20 14:39:18.890: INFO: stdout: "service/rm2 exposed\n"
Jan 20 14:39:18.935: INFO: Service rm2 in namespace kubectl-5649 found.
STEP: exposing service
Jan 20 14:39:20.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5649'
Jan 20 14:39:21.174: INFO: stderr: ""
Jan 20 14:39:21.174: INFO: stdout: "service/rm3 exposed\n"
Jan 20 14:39:21.189: INFO: Service rm3 in namespace kubectl-5649 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:39:23.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5649" for this suite.
Jan 20 14:39:45.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:39:45.340: INFO: namespace kubectl-5649 deletion completed in 22.133025447s

• [SLOW TEST:35.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
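The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` invocation above is shorthand for creating a Service selecting the RC's pods; the generated object is roughly (a sketch, with the selector inferred from the `app:redis` label the test matches on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-5649
spec:
  selector:
    app: redis                 # copied from the RC's pod template labels
  ports:
  - port: 1234                 # --port
    targetPort: 6379           # --target-port
```

Exposing the service again as `rm3` simply produces a second Service with the same selector and `port: 2345`, which is why both endpoints resolve to the single redis-master pod.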
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:39:45.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0120 14:40:25.566924       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 14:40:25.567: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:40:25.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2101" for this suite.
Jan 20 14:40:37.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:40:38.717: INFO: namespace gc-2101 deletion completed in 13.139476111s

• [SLOW TEST:53.377 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
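"Delete options say so" refers to the orphan propagation policy on the delete request. Expressed as a DeleteOptions body (a sketch of the API object; with kubectl on this version the equivalent is `--cascade=false`):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan    # dependents (the rc's pods) are kept, not cascaded
```

With `Orphan`, the garbage collector removes the ownerReferences from the pods instead of deleting them, so the 30-second observation window above should see every pod survive the rc's deletion.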
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:40:38.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 14:40:40.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4685'
Jan 20 14:40:40.871: INFO: stderr: ""
Jan 20 14:40:40.871: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 20 14:40:55.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4685 -o json'
Jan 20 14:40:56.095: INFO: stderr: ""
Jan 20 14:40:56.095: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-20T14:40:40Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4685\",\n        \"resourceVersion\": \"21195497\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4685/pods/e2e-test-nginx-pod\",\n        \"uid\": \"25d7e339-8143-477b-83ef-97b1ac63becb\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-fndsq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-fndsq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-fndsq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T14:40:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T14:40:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T14:40:51Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T14:40:40Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://71e7ef34d9c05ea09a1c34988a0ec134ed0192c800a37cd7e09f3dad5735a081\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-20T14:40:51Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-20T14:40:41Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 20 14:40:56.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4685'
Jan 20 14:40:56.630: INFO: stderr: ""
Jan 20 14:40:56.630: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 20 14:40:56.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4685'
Jan 20 14:41:03.396: INFO: stderr: ""
Jan 20 14:41:03.396: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:41:03.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4685" for this suite.
Jan 20 14:41:09.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:41:09.702: INFO: namespace kubectl-4685 deletion completed in 6.294328001s

• [SLOW TEST:30.984 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
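The verification step above checks the image field of the replaced pod against the expected value. A minimal sketch of that check, parsing `kubectl get pod -o json` output with the standard library (`container_images` is a hypothetical helper, not part of the e2e framework; the pod document is abbreviated from the log output above):

```python
import json

def container_images(pod_json: str) -> list[str]:
    """Extract container image names from a `kubectl get pod -o json` document."""
    pod = json.loads(pod_json)
    return [c["image"] for c in pod["spec"]["containers"]]

# Abbreviated pod document modeled on the stdout captured above,
# after the image was replaced with busybox:1.29.
pod_doc = json.dumps({
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-nginx-pod", "namespace": "kubectl-4685"},
    "spec": {"containers": [{"name": "e2e-test-nginx-pod",
                             "image": "docker.io/library/busybox:1.29"}]},
})

assert container_images(pod_doc) == ["docker.io/library/busybox:1.29"]
```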
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:41:09.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 20 14:41:09.838: INFO: Waiting up to 5m0s for pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d" in namespace "emptydir-3864" to be "success or failure"
Jan 20 14:41:09.913: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d": Phase="Pending", Reason="", readiness=false. Elapsed: 75.024336ms
Jan 20 14:41:11.926: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087935231s
Jan 20 14:41:13.941: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103576746s
Jan 20 14:41:15.951: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113484041s
Jan 20 14:41:17.970: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132464835s
STEP: Saw pod success
Jan 20 14:41:17.970: INFO: Pod "pod-06286d89-f8c0-4f9c-b516-d2538594a27d" satisfied condition "success or failure"
Jan 20 14:41:17.977: INFO: Trying to get logs from node iruya-node pod pod-06286d89-f8c0-4f9c-b516-d2538594a27d container test-container: 
STEP: delete the pod
Jan 20 14:41:18.071: INFO: Waiting for pod pod-06286d89-f8c0-4f9c-b516-d2538594a27d to disappear
Jan 20 14:41:18.094: INFO: Pod pod-06286d89-f8c0-4f9c-b516-d2538594a27d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:41:18.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3864" for this suite.
Jan 20 14:41:24.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:41:24.235: INFO: namespace emptydir-3864 deletion completed in 6.133803607s

• [SLOW TEST:14.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
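The emptyDir test above creates a file with mode 0666 inside the volume and asserts its permission bits. Outside a cluster, the permission check can be sketched as follows (a local illustration only, not the test's actual container commands):

```python
import os
import stat
import tempfile

# Create a file the way the test container does inside the emptyDir mount,
# then verify its permission bits are exactly 0666.
with tempfile.TemporaryDirectory() as mount:
    path = os.path.join(mount, "test-file")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    os.chmod(path, 0o666)  # the create mode is masked by umask; chmod sets it exactly
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o666, oct(mode)
```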
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:41:24.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-58ad06eb-2a0a-424e-a0c9-c3190c5eca43
STEP: Creating a pod to test consume configMaps
Jan 20 14:41:24.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0" in namespace "projected-7862" to be "success or failure"
Jan 20 14:41:24.380: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.664559ms
Jan 20 14:41:26.394: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036126785s
Jan 20 14:41:28.405: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047286216s
Jan 20 14:41:30.417: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05917817s
Jan 20 14:41:32.424: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066331126s
STEP: Saw pod success
Jan 20 14:41:32.424: INFO: Pod "pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0" satisfied condition "success or failure"
Jan 20 14:41:32.427: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 14:41:32.472: INFO: Waiting for pod pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0 to disappear
Jan 20 14:41:32.475: INFO: Pod pod-projected-configmaps-481ebcff-205d-4966-8132-9ddb01693ca0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:41:32.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7862" for this suite.
Jan 20 14:41:38.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:41:38.663: INFO: namespace projected-7862 deletion completed in 6.183091146s

• [SLOW TEST:14.428 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
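The projected-configMap test consumes a ConfigMap as a volume, where each key appears as a file under the mount path. A minimal sketch of that key-to-file mapping (`materialize_configmap` is a hypothetical helper mimicking what the kubelet does, not framework code):

```python
import pathlib
import tempfile

def materialize_configmap(data: dict[str, str], mount: pathlib.Path) -> None:
    """Write each ConfigMap key as a file under the mount path,
    mirroring how a configMap volume appears inside the container."""
    for key, value in data.items():
        (mount / key).write_text(value)

with tempfile.TemporaryDirectory() as d:
    mount = pathlib.Path(d)
    materialize_configmap({"data-1": "value-1"}, mount)
    assert (mount / "data-1").read_text() == "value-1"
```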
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:41:38.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 14:41:38.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-335'
Jan 20 14:41:38.925: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 14:41:38.925: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 20 14:41:38.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-335'
Jan 20 14:41:39.177: INFO: stderr: ""
Jan 20 14:41:39.177: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:41:39.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-335" for this suite.
Jan 20 14:42:01.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:42:01.330: INFO: namespace kubectl-335 deletion completed in 22.148519754s

• [SLOW TEST:22.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:42:01.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 20 14:42:01.534: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195687,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 14:42:01.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195688,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 20 14:42:01.534: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195689,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 20 14:42:11.716: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195704,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 14:42:11.716: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195705,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 20 14:42:11.717: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3473,SelfLink:/api/v1/namespaces/watch-3473/configmaps/e2e-watch-test-label-changed,UID:8897a053-4460-49c6-9a89-27bb769b76d6,ResourceVersion:21195706,Generation:0,CreationTimestamp:2020-01-20 14:42:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:42:11.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3473" for this suite.
Jan 20 14:42:17.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:42:17.986: INFO: namespace watch-3473 deletion completed in 6.258415346s

• [SLOW TEST:16.655 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
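The watch test above relies on label-selector semantics: when an object's label stops matching the selector, the watch reports DELETED, and when the label is restored it reports ADDED again. That translation from object snapshots to watch events can be sketched as follows (`selector_events` is a hypothetical model of the behavior, not the apiserver implementation):

```python
def selector_events(snapshots, selector):
    """Translate successive object snapshots into watch events as seen
    through a label selector: leaving the selector surfaces as DELETED,
    re-entering it as ADDED, changes while visible as MODIFIED."""
    events, visible = [], False
    for labels, version in snapshots:
        matches = all(labels.get(k) == v for k, v in selector.items())
        if matches and not visible:
            events.append(("ADDED", version))
        elif matches and visible:
            events.append(("MODIFIED", version))
        elif visible and not matches:
            events.append(("DELETED", version))
        visible = matches
    return events

# Snapshot sequence modeled on the configmap mutations logged above.
snaps = [
    ({"watch-this-configmap": "label-changed-and-restored"}, 1),  # created
    ({"watch-this-configmap": "label-changed-and-restored"}, 2),  # modified once
    ({"watch-this-configmap": "other-value"}, 3),                 # label changed
    ({"watch-this-configmap": "other-value"}, 4),                 # modified, unseen
    ({"watch-this-configmap": "label-changed-and-restored"}, 5),  # label restored
    ({"watch-this-configmap": "label-changed-and-restored"}, 6),  # modified again
]
assert selector_events(snaps, {"watch-this-configmap": "label-changed-and-restored"}) == [
    ("ADDED", 1), ("MODIFIED", 2), ("DELETED", 3), ("ADDED", 5), ("MODIFIED", 6),
]
```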
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:42:17.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 20 14:42:18.641: INFO: created pod pod-service-account-defaultsa
Jan 20 14:42:18.641: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 20 14:42:18.658: INFO: created pod pod-service-account-mountsa
Jan 20 14:42:18.658: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 20 14:42:18.717: INFO: created pod pod-service-account-nomountsa
Jan 20 14:42:18.717: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 20 14:42:18.733: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 20 14:42:18.733: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 20 14:42:18.765: INFO: created pod pod-service-account-mountsa-mountspec
Jan 20 14:42:18.765: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 20 14:42:18.901: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 20 14:42:18.901: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 20 14:42:18.953: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 20 14:42:18.953: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 20 14:42:19.689: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 20 14:42:19.690: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 20 14:42:20.252: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 20 14:42:20.252: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:42:20.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1277" for this suite.
Jan 20 14:42:58.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:42:58.751: INFO: namespace svcaccounts-1277 deletion completed in 38.403543606s

• [SLOW TEST:40.764 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
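The nine pods above enumerate the full automount matrix: the pod spec's `automountServiceAccountToken` takes precedence over the ServiceAccount's setting, and the default is to mount. A sketch of that decision, with a truth table taken directly from the log lines above (`mounts_token` is a hypothetical helper, not framework code):

```python
def mounts_token(sa_automount, pod_automount):
    """Decide whether a pod gets the service-account token volume:
    the pod-spec setting wins, then the ServiceAccount setting,
    and the default is to mount."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# (ServiceAccount setting, pod-spec setting) -> mounted, per the log above.
cases = {
    (None, None): True,     # pod-service-account-defaultsa
    (True, None): True,     # ...-mountsa
    (False, None): False,   # ...-nomountsa
    (None, True): True,     # ...-defaultsa-mountspec
    (True, True): True,     # ...-mountsa-mountspec
    (False, True): True,    # ...-nomountsa-mountspec
    (None, False): False,   # ...-defaultsa-nomountspec
    (True, False): False,   # ...-mountsa-nomountspec
    (False, False): False,  # ...-nomountsa-nomountspec
}
for (sa, pod), want in cases.items():
    assert mounts_token(sa, pod) == want
```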
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:42:58.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 20 14:43:07.404: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1694 pod-service-account-1b190ece-5904-4242-a78f-3852649256d2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 20 14:43:07.882: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1694 pod-service-account-1b190ece-5904-4242-a78f-3852649256d2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 20 14:43:08.302: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1694 pod-service-account-1b190ece-5904-4242-a78f-3852649256d2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:43:08.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1694" for this suite.
Jan 20 14:43:14.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:43:14.997: INFO: namespace svcaccounts-1694 deletion completed in 6.211252968s

• [SLOW TEST:16.246 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:43:14.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:43:15.127: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.891098ms)
Jan 20 14:43:15.138: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.304816ms)
Jan 20 14:43:15.164: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 25.74592ms)
Jan 20 14:43:15.169: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.860237ms)
Jan 20 14:43:15.175: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.599197ms)
Jan 20 14:43:15.179: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.1235ms)
Jan 20 14:43:15.183: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.793611ms)
Jan 20 14:43:15.188: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.956235ms)
Jan 20 14:43:15.194: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.266891ms)
Jan 20 14:43:15.198: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.769301ms)
Jan 20 14:43:15.203: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.018603ms)
Jan 20 14:43:15.210: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.197049ms)
Jan 20 14:43:15.216: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.659894ms)
Jan 20 14:43:15.224: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.838423ms)
Jan 20 14:43:15.232: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.077217ms)
Jan 20 14:43:15.239: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.876077ms)
Jan 20 14:43:15.243: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.01978ms)
Jan 20 14:43:15.248: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.401522ms)
Jan 20 14:43:15.251: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.065962ms)
Jan 20 14:43:15.254: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.191605ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:43:15.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5346" for this suite.
Jan 20 14:43:21.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:43:21.439: INFO: namespace proxy-5346 deletion completed in 6.181620596s

• [SLOW TEST:6.442 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
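Each of the twenty requests above targets the same apiserver proxy path for the kubelet's log endpoint. The path structure can be sketched as a small builder (`node_proxy_path` is a hypothetical helper, not framework code):

```python
def node_proxy_path(node: str, port: int, subpath: str = "logs/") -> str:
    """Build the apiserver proxy URL path for a kubelet endpoint,
    matching the requests issued in the proxy test above."""
    return f"/api/v1/nodes/{node}:{port}/proxy/{subpath}"

assert node_proxy_path("iruya-node", 10250) == \
    "/api/v1/nodes/iruya-node:10250/proxy/logs/"
```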
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:43:21.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 20 14:43:29.631: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 20 14:43:39.795: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:43:39.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5395" for this suite.
Jan 20 14:43:45.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:43:45.962: INFO: namespace pods-5395 deletion completed in 6.13322894s

• [SLOW TEST:24.522 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:43:45.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8b82a86c-edc2-4b78-919a-ba01d725f461
STEP: Creating a pod to test consume configMaps
Jan 20 14:43:46.083: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab" in namespace "configmap-1591" to be "success or failure"
Jan 20 14:43:46.086: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.094152ms
Jan 20 14:43:48.104: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021077923s
Jan 20 14:43:50.112: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029143429s
Jan 20 14:43:52.127: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043995207s
Jan 20 14:43:54.141: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058284152s
STEP: Saw pod success
Jan 20 14:43:54.141: INFO: Pod "pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab" satisfied condition "success or failure"
Jan 20 14:43:54.147: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab container configmap-volume-test: 
STEP: delete the pod
Jan 20 14:43:54.295: INFO: Waiting for pod pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab to disappear
Jan 20 14:43:54.305: INFO: Pod pod-configmaps-1a275c52-a1dd-4ad2-af98-d7f3da3391ab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:43:54.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1591" for this suite.
Jan 20 14:44:00.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:44:00.498: INFO: namespace configmap-1591 deletion completed in 6.18454963s

• [SLOW TEST:14.536 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:44:00.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8702/configmap-test-788c617d-5b77-4a98-b9b0-d21337a97398
STEP: Creating a pod to test consume configMaps
Jan 20 14:44:00.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa" in namespace "configmap-8702" to be "success or failure"
Jan 20 14:44:00.646: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa": Phase="Pending", Reason="", readiness=false. Elapsed: 14.953672ms
Jan 20 14:44:02.661: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029804022s
Jan 20 14:44:04.666: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034999192s
Jan 20 14:44:06.687: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055605217s
Jan 20 14:44:08.712: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081312742s
STEP: Saw pod success
Jan 20 14:44:08.712: INFO: Pod "pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa" satisfied condition "success or failure"
Jan 20 14:44:08.717: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa container env-test: 
STEP: delete the pod
Jan 20 14:44:08.772: INFO: Waiting for pod pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa to disappear
Jan 20 14:44:08.781: INFO: Pod pod-configmaps-e5bc9998-1b32-4ef1-94fb-fa58321e39fa no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:44:08.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8702" for this suite.
Jan 20 14:44:14.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:44:14.950: INFO: namespace configmap-8702 deletion completed in 6.163489478s

• [SLOW TEST:14.450 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:44:14.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:44:15.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8871'
Jan 20 14:44:15.367: INFO: stderr: ""
Jan 20 14:44:15.367: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 20 14:44:15.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8871'
Jan 20 14:44:15.882: INFO: stderr: ""
Jan 20 14:44:15.883: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 20 14:44:16.896: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:16.896: INFO: Found 0 / 1
Jan 20 14:44:17.890: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:17.890: INFO: Found 0 / 1
Jan 20 14:44:18.900: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:18.901: INFO: Found 0 / 1
Jan 20 14:44:19.894: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:19.894: INFO: Found 0 / 1
Jan 20 14:44:20.892: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:20.892: INFO: Found 0 / 1
Jan 20 14:44:21.903: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:21.903: INFO: Found 0 / 1
Jan 20 14:44:22.892: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:22.893: INFO: Found 0 / 1
Jan 20 14:44:23.898: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:23.899: INFO: Found 1 / 1
Jan 20 14:44:23.899: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 20 14:44:23.916: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 14:44:23.916: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 20 14:44:23.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-frhhx --namespace=kubectl-8871'
Jan 20 14:44:24.081: INFO: stderr: ""
Jan 20 14:44:24.081: INFO: stdout: "Name:           redis-master-frhhx\nNamespace:      kubectl-8871\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Mon, 20 Jan 2020 14:44:15 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://441f7e180822cca03a29c4cf605e79c5b8be50c3ef60d3f7d366fbf2c284c664\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 20 Jan 2020 14:44:21 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gcxfp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gcxfp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gcxfp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-8871/redis-master-frhhx to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    3s    kubelet, iruya-node  Started container redis-master\n"
Jan 20 14:44:24.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8871'
Jan 20 14:44:24.200: INFO: stderr: ""
Jan 20 14:44:24.200: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8871\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-frhhx\n"
Jan 20 14:44:24.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8871'
Jan 20 14:44:24.331: INFO: stderr: ""
Jan 20 14:44:24.331: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8871\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.103.185.156\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 20 14:44:24.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 20 14:44:24.458: INFO: stderr: ""
Jan 20 14:44:24.458: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 20 Jan 2020 14:43:35 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 20 Jan 2020 14:43:35 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 20 Jan 2020 14:43:35 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 20 Jan 2020 14:43:35 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         169d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         100d\n  kubectl-8871               redis-master-frhhx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 20 14:44:24.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8871'
Jan 20 14:44:24.570: INFO: stderr: ""
Jan 20 14:44:24.570: INFO: stdout: "Name:         kubectl-8871\nLabels:       e2e-framework=kubectl\n              e2e-run=fd28f523-115b-4f8d-a77f-f0d26c35e455\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:44:24.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8871" for this suite.
Jan 20 14:44:48.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:44:48.704: INFO: namespace kubectl-8871 deletion completed in 24.12936645s

• [SLOW TEST:33.753 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:44:48.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:44:57.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5531" for this suite.
Jan 20 14:45:19.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:45:20.106: INFO: namespace replication-controller-5531 deletion completed in 22.159412439s

• [SLOW TEST:31.402 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:45:20.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:45:20.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:45:28.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9063" for this suite.
Jan 20 14:46:20.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:46:20.495: INFO: namespace pods-9063 deletion completed in 52.165861484s

• [SLOW TEST:60.388 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:46:20.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:46:28.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6042" for this suite.
Jan 20 14:47:20.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:47:20.931: INFO: namespace kubelet-test-6042 deletion completed in 52.175883428s

• [SLOW TEST:60.435 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:47:20.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7h9x
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 14:47:21.055: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7h9x" in namespace "subpath-6631" to be "success or failure"
Jan 20 14:47:21.081: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Pending", Reason="", readiness=false. Elapsed: 25.283044ms
Jan 20 14:47:23.088: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032624848s
Jan 20 14:47:26.316: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Pending", Reason="", readiness=false. Elapsed: 5.260792591s
Jan 20 14:47:28.325: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Pending", Reason="", readiness=false. Elapsed: 7.269161977s
Jan 20 14:47:30.332: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 9.276068988s
Jan 20 14:47:32.341: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 11.285317714s
Jan 20 14:47:34.352: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 13.295947803s
Jan 20 14:47:36.369: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 15.312975833s
Jan 20 14:47:38.378: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 17.322597899s
Jan 20 14:47:40.389: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 19.333807378s
Jan 20 14:47:42.400: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 21.344728207s
Jan 20 14:47:44.410: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 23.354457732s
Jan 20 14:47:46.421: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 25.365785651s
Jan 20 14:47:48.429: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 27.373409813s
Jan 20 14:47:50.440: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Running", Reason="", readiness=true. Elapsed: 29.384600839s
Jan 20 14:47:52.447: INFO: Pod "pod-subpath-test-configmap-7h9x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.391824591s
STEP: Saw pod success
Jan 20 14:47:52.448: INFO: Pod "pod-subpath-test-configmap-7h9x" satisfied condition "success or failure"
Jan 20 14:47:52.451: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-7h9x container test-container-subpath-configmap-7h9x: 
STEP: delete the pod
Jan 20 14:47:52.507: INFO: Waiting for pod pod-subpath-test-configmap-7h9x to disappear
Jan 20 14:47:52.516: INFO: Pod pod-subpath-test-configmap-7h9x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7h9x
Jan 20 14:47:52.516: INFO: Deleting pod "pod-subpath-test-configmap-7h9x" in namespace "subpath-6631"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:47:52.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6631" for this suite.
Jan 20 14:47:58.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:47:58.728: INFO: namespace subpath-6631 deletion completed in 6.204356662s

• [SLOW TEST:37.797 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:47:58.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6f9e03c8-ba9f-4217-abb7-6f1cc46cbff2
STEP: Creating a pod to test consume configMaps
Jan 20 14:47:58.847: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332" in namespace "projected-4978" to be "success or failure"
Jan 20 14:47:58.873: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332": Phase="Pending", Reason="", readiness=false. Elapsed: 25.773534ms
Jan 20 14:48:00.880: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032875024s
Jan 20 14:48:02.896: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048215705s
Jan 20 14:48:04.904: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05647762s
Jan 20 14:48:06.911: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063714657s
STEP: Saw pod success
Jan 20 14:48:06.911: INFO: Pod "pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332" satisfied condition "success or failure"
Jan 20 14:48:06.916: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 14:48:07.235: INFO: Waiting for pod pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332 to disappear
Jan 20 14:48:07.241: INFO: Pod pod-projected-configmaps-a5342dd9-f399-418e-8f1f-3b0f704dd332 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:48:07.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4978" for this suite.
Jan 20 14:48:13.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:48:13.414: INFO: namespace projected-4978 deletion completed in 6.163599088s

• [SLOW TEST:14.685 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:48:13.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:48:23.716: INFO: Waiting up to 5m0s for pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02" in namespace "pods-4684" to be "success or failure"
Jan 20 14:48:23.727: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02": Phase="Pending", Reason="", readiness=false. Elapsed: 11.017223ms
Jan 20 14:48:25.737: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020785815s
Jan 20 14:48:27.750: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033202535s
Jan 20 14:48:29.765: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049137063s
Jan 20 14:48:31.776: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059390946s
STEP: Saw pod success
Jan 20 14:48:31.776: INFO: Pod "client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02" satisfied condition "success or failure"
Jan 20 14:48:31.782: INFO: Trying to get logs from node iruya-node pod client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02 container env3cont: 
STEP: delete the pod
Jan 20 14:48:31.882: INFO: Waiting for pod client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02 to disappear
Jan 20 14:48:31.950: INFO: Pod client-envvars-2d6fca79-fd68-4022-9f20-aba624b0cf02 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:48:31.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4684" for this suite.
Jan 20 14:49:17.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:49:18.163: INFO: namespace pods-4684 deletion completed in 46.201605113s

• [SLOW TEST:64.748 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:49:18.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 20 14:49:18.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7320'
Jan 20 14:49:20.952: INFO: stderr: ""
Jan 20 14:49:20.952: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 14:49:20.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7320'
Jan 20 14:49:21.361: INFO: stderr: ""
Jan 20 14:49:21.361: INFO: stdout: "update-demo-nautilus-fgcrs update-demo-nautilus-h6k55 "
Jan 20 14:49:21.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:21.481: INFO: stderr: ""
Jan 20 14:49:21.481: INFO: stdout: ""
Jan 20 14:49:21.481: INFO: update-demo-nautilus-fgcrs is created but not running
Jan 20 14:49:26.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7320'
Jan 20 14:49:27.034: INFO: stderr: ""
Jan 20 14:49:27.034: INFO: stdout: "update-demo-nautilus-fgcrs update-demo-nautilus-h6k55 "
Jan 20 14:49:27.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:27.502: INFO: stderr: ""
Jan 20 14:49:27.502: INFO: stdout: ""
Jan 20 14:49:27.502: INFO: update-demo-nautilus-fgcrs is created but not running
Jan 20 14:49:32.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7320'
Jan 20 14:49:32.653: INFO: stderr: ""
Jan 20 14:49:32.653: INFO: stdout: "update-demo-nautilus-fgcrs update-demo-nautilus-h6k55 "
Jan 20 14:49:32.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:32.829: INFO: stderr: ""
Jan 20 14:49:32.829: INFO: stdout: "true"
Jan 20 14:49:32.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:32.917: INFO: stderr: ""
Jan 20 14:49:32.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:49:32.917: INFO: validating pod update-demo-nautilus-fgcrs
Jan 20 14:49:32.928: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:49:32.928: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:49:32.928: INFO: update-demo-nautilus-fgcrs is verified up and running
Jan 20 14:49:32.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6k55 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:33.028: INFO: stderr: ""
Jan 20 14:49:33.028: INFO: stdout: "true"
Jan 20 14:49:33.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6k55 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7320'
Jan 20 14:49:33.181: INFO: stderr: ""
Jan 20 14:49:33.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:49:33.181: INFO: validating pod update-demo-nautilus-h6k55
Jan 20 14:49:33.201: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:49:33.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:49:33.201: INFO: update-demo-nautilus-h6k55 is verified up and running
STEP: using delete to clean up resources
Jan 20 14:49:33.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7320'
Jan 20 14:49:33.314: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:49:33.314: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 14:49:33.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7320'
Jan 20 14:49:33.448: INFO: stderr: "No resources found.\n"
Jan 20 14:49:33.448: INFO: stdout: ""
Jan 20 14:49:33.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7320 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 14:49:33.634: INFO: stderr: ""
Jan 20 14:49:33.634: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:49:33.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7320" for this suite.
Jan 20 14:49:55.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:49:55.907: INFO: namespace kubectl-7320 deletion completed in 22.217379213s

• [SLOW TEST:37.744 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:49:55.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 20 14:49:56.012: INFO: Waiting up to 5m0s for pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8" in namespace "downward-api-5640" to be "success or failure"
Jan 20 14:49:56.019: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.27918ms
Jan 20 14:49:58.027: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014953345s
Jan 20 14:50:00.059: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047667688s
Jan 20 14:50:02.133: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121763934s
Jan 20 14:50:04.150: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137910394s
STEP: Saw pod success
Jan 20 14:50:04.150: INFO: Pod "downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8" satisfied condition "success or failure"
Jan 20 14:50:04.154: INFO: Trying to get logs from node iruya-node pod downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8 container dapi-container: 
STEP: delete the pod
Jan 20 14:50:04.259: INFO: Waiting for pod downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8 to disappear
Jan 20 14:50:04.278: INFO: Pod downward-api-f0a33963-6e2d-4c1f-91f2-9d45762c7fa8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:50:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5640" for this suite.
Jan 20 14:50:10.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:50:10.511: INFO: namespace downward-api-5640 deletion completed in 6.226283277s

• [SLOW TEST:14.604 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:50:10.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 14:50:10.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5" in namespace "projected-3904" to be "success or failure"
Jan 20 14:50:10.704: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 119.259238ms
Jan 20 14:50:12.720: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134806803s
Jan 20 14:50:14.734: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149459504s
Jan 20 14:50:16.742: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157445842s
Jan 20 14:50:18.755: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16982445s
Jan 20 14:50:20.767: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182011387s
STEP: Saw pod success
Jan 20 14:50:20.767: INFO: Pod "downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5" satisfied condition "success or failure"
Jan 20 14:50:20.773: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5 container client-container: 
STEP: delete the pod
Jan 20 14:50:20.841: INFO: Waiting for pod downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5 to disappear
Jan 20 14:50:20.927: INFO: Pod downwardapi-volume-8bcf6991-738b-4ad8-a16a-f61227bba4b5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:50:20.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3904" for this suite.
Jan 20 14:50:26.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:50:27.083: INFO: namespace projected-3904 deletion completed in 6.149688627s

• [SLOW TEST:16.572 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:50:27.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 in namespace container-probe-5628
Jan 20 14:50:35.181: INFO: Started pod liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 in namespace container-probe-5628
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 14:50:35.185: INFO: Initial restart count of pod liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is 0
Jan 20 14:50:51.264: INFO: Restart count of pod container-probe-5628/liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is now 1 (16.079232852s elapsed)
Jan 20 14:51:13.691: INFO: Restart count of pod container-probe-5628/liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is now 2 (38.506529428s elapsed)
Jan 20 14:51:33.787: INFO: Restart count of pod container-probe-5628/liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is now 3 (58.601967808s elapsed)
Jan 20 14:51:51.930: INFO: Restart count of pod container-probe-5628/liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is now 4 (1m16.744880967s elapsed)
Jan 20 14:52:54.335: INFO: Restart count of pod container-probe-5628/liveness-39e17a75-a8aa-4cf3-8665-25ae42b6c605 is now 5 (2m19.1504107s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:52:54.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5628" for this suite.
Jan 20 14:53:00.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:53:00.594: INFO: namespace container-probe-5628 deletion completed in 6.17314133s

• [SLOW TEST:153.511 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:53:00.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 20 14:53:00.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2335'
Jan 20 14:53:01.036: INFO: stderr: ""
Jan 20 14:53:01.036: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 14:53:01.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:01.214: INFO: stderr: ""
Jan 20 14:53:01.214: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
Jan 20 14:53:01.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z2nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:01.419: INFO: stderr: ""
Jan 20 14:53:01.419: INFO: stdout: ""
Jan 20 14:53:01.419: INFO: update-demo-nautilus-7z2nz is created but not running
Jan 20 14:53:06.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:07.381: INFO: stderr: ""
Jan 20 14:53:07.381: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
Jan 20 14:53:07.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z2nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:07.833: INFO: stderr: ""
Jan 20 14:53:07.833: INFO: stdout: ""
Jan 20 14:53:07.833: INFO: update-demo-nautilus-7z2nz is created but not running
Jan 20 14:53:12.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:13.019: INFO: stderr: ""
Jan 20 14:53:13.019: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
Jan 20 14:53:13.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z2nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:13.114: INFO: stderr: ""
Jan 20 14:53:13.114: INFO: stdout: "true"
Jan 20 14:53:13.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z2nz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:13.256: INFO: stderr: ""
Jan 20 14:53:13.256: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:53:13.257: INFO: validating pod update-demo-nautilus-7z2nz
Jan 20 14:53:13.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:53:13.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:53:13.271: INFO: update-demo-nautilus-7z2nz is verified up and running
Jan 20 14:53:13.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:13.355: INFO: stderr: ""
Jan 20 14:53:13.355: INFO: stdout: "true"
Jan 20 14:53:13.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:13.471: INFO: stderr: ""
Jan 20 14:53:13.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:53:13.471: INFO: validating pod update-demo-nautilus-rknmj
Jan 20 14:53:13.477: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:53:13.477: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:53:13.477: INFO: update-demo-nautilus-rknmj is verified up and running
STEP: scaling down the replication controller
Jan 20 14:53:13.480: INFO: scanned /root for discovery docs: 
Jan 20 14:53:13.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2335'
Jan 20 14:53:14.629: INFO: stderr: ""
Jan 20 14:53:14.629: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 14:53:14.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:14.754: INFO: stderr: ""
Jan 20 14:53:14.754: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 14:53:19.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:19.908: INFO: stderr: ""
Jan 20 14:53:19.908: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 14:53:24.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:25.064: INFO: stderr: ""
Jan 20 14:53:25.064: INFO: stdout: "update-demo-nautilus-7z2nz update-demo-nautilus-rknmj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 14:53:30.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:30.206: INFO: stderr: ""
Jan 20 14:53:30.206: INFO: stdout: "update-demo-nautilus-rknmj "
Jan 20 14:53:30.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:30.311: INFO: stderr: ""
Jan 20 14:53:30.311: INFO: stdout: "true"
Jan 20 14:53:30.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:30.445: INFO: stderr: ""
Jan 20 14:53:30.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:53:30.445: INFO: validating pod update-demo-nautilus-rknmj
Jan 20 14:53:30.453: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:53:30.453: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:53:30.453: INFO: update-demo-nautilus-rknmj is verified up and running
STEP: scaling up the replication controller
Jan 20 14:53:30.456: INFO: scanned /root for discovery docs: 
Jan 20 14:53:30.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2335'
Jan 20 14:53:31.754: INFO: stderr: ""
Jan 20 14:53:31.754: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 14:53:31.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:31.990: INFO: stderr: ""
Jan 20 14:53:31.990: INFO: stdout: "update-demo-nautilus-928v2 update-demo-nautilus-rknmj "
Jan 20 14:53:31.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-928v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:32.089: INFO: stderr: ""
Jan 20 14:53:32.090: INFO: stdout: ""
Jan 20 14:53:32.090: INFO: update-demo-nautilus-928v2 is created but not running
Jan 20 14:53:37.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:37.272: INFO: stderr: ""
Jan 20 14:53:37.273: INFO: stdout: "update-demo-nautilus-928v2 update-demo-nautilus-rknmj "
Jan 20 14:53:37.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-928v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:37.447: INFO: stderr: ""
Jan 20 14:53:37.447: INFO: stdout: ""
Jan 20 14:53:37.447: INFO: update-demo-nautilus-928v2 is created but not running
Jan 20 14:53:42.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Jan 20 14:53:42.631: INFO: stderr: ""
Jan 20 14:53:42.631: INFO: stdout: "update-demo-nautilus-928v2 update-demo-nautilus-rknmj "
Jan 20 14:53:42.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-928v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:42.741: INFO: stderr: ""
Jan 20 14:53:42.741: INFO: stdout: "true"
Jan 20 14:53:42.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-928v2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:42.865: INFO: stderr: ""
Jan 20 14:53:42.866: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:53:42.866: INFO: validating pod update-demo-nautilus-928v2
Jan 20 14:53:42.883: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:53:42.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:53:42.883: INFO: update-demo-nautilus-928v2 is verified up and running
Jan 20 14:53:42.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:42.991: INFO: stderr: ""
Jan 20 14:53:42.991: INFO: stdout: "true"
Jan 20 14:53:42.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rknmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Jan 20 14:53:43.082: INFO: stderr: ""
Jan 20 14:53:43.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 14:53:43.082: INFO: validating pod update-demo-nautilus-rknmj
Jan 20 14:53:43.088: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 14:53:43.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 14:53:43.088: INFO: update-demo-nautilus-rknmj is verified up and running
STEP: using delete to clean up resources
Jan 20 14:53:43.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2335'
Jan 20 14:53:43.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 14:53:43.174: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 14:53:43.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2335'
Jan 20 14:53:43.330: INFO: stderr: "No resources found.\n"
Jan 20 14:53:43.330: INFO: stdout: ""
Jan 20 14:53:43.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2335 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 14:53:43.549: INFO: stderr: ""
Jan 20 14:53:43.549: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:53:43.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2335" for this suite.
Jan 20 14:54:05.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:54:05.733: INFO: namespace kubectl-2335 deletion completed in 22.173474326s

• [SLOW TEST:65.138 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:54:05.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9d12356e-2366-457e-90f2-77438444aef4
STEP: Creating a pod to test consume configMaps
Jan 20 14:54:05.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4" in namespace "projected-6662" to be "success or failure"
Jan 20 14:54:05.896: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177175ms
Jan 20 14:54:07.905: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014428445s
Jan 20 14:54:09.912: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021529689s
Jan 20 14:54:11.924: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034070951s
Jan 20 14:54:13.945: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054830903s
Jan 20 14:54:15.956: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066050108s
STEP: Saw pod success
Jan 20 14:54:15.956: INFO: Pod "pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4" satisfied condition "success or failure"
Jan 20 14:54:15.961: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 14:54:16.016: INFO: Waiting for pod pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4 to disappear
Jan 20 14:54:16.022: INFO: Pod pod-projected-configmaps-79ef5b79-8eb5-48e3-be9e-7492fdf4eee4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:54:16.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6662" for this suite.
Jan 20 14:54:22.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:54:22.213: INFO: namespace projected-6662 deletion completed in 6.186301492s

• [SLOW TEST:16.480 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:54:22.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 14:54:22.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f" in namespace "downward-api-7690" to be "success or failure"
Jan 20 14:54:22.382: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.733264ms
Jan 20 14:54:24.409: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061966235s
Jan 20 14:54:26.421: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074012244s
Jan 20 14:54:28.428: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081198754s
Jan 20 14:54:30.439: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091831112s
Jan 20 14:54:32.448: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100606911s
STEP: Saw pod success
Jan 20 14:54:32.448: INFO: Pod "downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f" satisfied condition "success or failure"
Jan 20 14:54:32.452: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f container client-container: 
STEP: delete the pod
Jan 20 14:54:32.528: INFO: Waiting for pod downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f to disappear
Jan 20 14:54:32.533: INFO: Pod downwardapi-volume-7926600f-b1e6-4d66-b4a1-3e883f591c0f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:54:32.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7690" for this suite.
Jan 20 14:54:38.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:54:38.710: INFO: namespace downward-api-7690 deletion completed in 6.17121118s

• [SLOW TEST:16.497 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:54:38.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 20 14:54:38.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:54:47.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4511" for this suite.
Jan 20 14:55:39.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:55:39.563: INFO: namespace pods-4511 deletion completed in 52.174445961s

• [SLOW TEST:60.853 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:55:39.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 20 14:55:39.679: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7236" to be "success or failure"
Jan 20 14:55:39.694: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.224332ms
Jan 20 14:55:41.707: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02774481s
Jan 20 14:55:43.721: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041367481s
Jan 20 14:55:45.730: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050333781s
Jan 20 14:55:47.747: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06719217s
Jan 20 14:55:49.785: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105199697s
STEP: Saw pod success
Jan 20 14:55:49.785: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 20 14:55:49.793: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 20 14:55:49.907: INFO: Waiting for pod pod-host-path-test to disappear
Jan 20 14:55:49.916: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:55:49.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7236" for this suite.
Jan 20 14:55:55.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:55:56.089: INFO: namespace hostpath-7236 deletion completed in 6.168511768s

• [SLOW TEST:16.525 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:55:56.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-52fa33b5-2cc5-4707-a4fa-732fd6c72382
STEP: Creating a pod to test consume secrets
Jan 20 14:55:56.163: INFO: Waiting up to 5m0s for pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7" in namespace "secrets-1310" to be "success or failure"
Jan 20 14:55:56.184: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.613156ms
Jan 20 14:55:58.193: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030033217s
Jan 20 14:56:00.200: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036400383s
Jan 20 14:56:02.209: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045235376s
Jan 20 14:56:04.219: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05536559s
STEP: Saw pod success
Jan 20 14:56:04.219: INFO: Pod "pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7" satisfied condition "success or failure"
Jan 20 14:56:04.232: INFO: Trying to get logs from node iruya-node pod pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7 container secret-env-test: 
STEP: delete the pod
Jan 20 14:56:04.309: INFO: Waiting for pod pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7 to disappear
Jan 20 14:56:04.403: INFO: Pod pod-secrets-45f262d2-2370-41a4-a7e8-5d98706105c7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:56:04.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1310" for this suite.
Jan 20 14:56:10.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:56:10.644: INFO: namespace secrets-1310 deletion completed in 6.236416845s

• [SLOW TEST:14.555 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:56:10.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-bdd4a5bb-b283-4450-afc1-a2fadaf6083d
STEP: Creating configMap with name cm-test-opt-upd-1af569f5-e34f-44cf-af59-6b03b806ae17
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bdd4a5bb-b283-4450-afc1-a2fadaf6083d
STEP: Updating configmap cm-test-opt-upd-1af569f5-e34f-44cf-af59-6b03b806ae17
STEP: Creating configMap with name cm-test-opt-create-948d3df1-1aa0-48c7-8beb-cfef461e30e2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:57:40.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9933" for this suite.
Jan 20 14:58:04.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:58:04.841: INFO: namespace configmap-9933 deletion completed in 24.167441257s

• [SLOW TEST:114.196 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:58:04.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 20 14:58:05.011: INFO: Waiting up to 5m0s for pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5" in namespace "emptydir-2632" to be "success or failure"
Jan 20 14:58:05.015: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85491ms
Jan 20 14:58:07.029: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018103477s
Jan 20 14:58:09.040: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029163208s
Jan 20 14:58:11.050: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038942289s
Jan 20 14:58:13.060: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048980612s
STEP: Saw pod success
Jan 20 14:58:13.060: INFO: Pod "pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5" satisfied condition "success or failure"
Jan 20 14:58:13.064: INFO: Trying to get logs from node iruya-node pod pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5 container test-container: 
STEP: delete the pod
Jan 20 14:58:13.141: INFO: Waiting for pod pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5 to disappear
Jan 20 14:58:13.149: INFO: Pod pod-1e57fffb-5992-4777-ac13-c49d2ef0d3f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:58:13.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2632" for this suite.
Jan 20 14:58:19.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:58:19.401: INFO: namespace emptydir-2632 deletion completed in 6.168489646s

• [SLOW TEST:14.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:58:19.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 20 14:58:19.483: INFO: Waiting up to 5m0s for pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20" in namespace "emptydir-980" to be "success or failure"
Jan 20 14:58:19.489: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284856ms
Jan 20 14:58:21.499: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016162373s
Jan 20 14:58:23.506: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023043335s
Jan 20 14:58:25.515: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03237894s
Jan 20 14:58:27.524: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041477913s
STEP: Saw pod success
Jan 20 14:58:27.524: INFO: Pod "pod-f2180080-e95a-4229-90c5-dc3a208deb20" satisfied condition "success or failure"
Jan 20 14:58:27.530: INFO: Trying to get logs from node iruya-node pod pod-f2180080-e95a-4229-90c5-dc3a208deb20 container test-container: 
STEP: delete the pod
Jan 20 14:58:27.582: INFO: Waiting for pod pod-f2180080-e95a-4229-90c5-dc3a208deb20 to disappear
Jan 20 14:58:27.642: INFO: Pod pod-f2180080-e95a-4229-90c5-dc3a208deb20 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:58:27.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-980" for this suite.
Jan 20 14:58:33.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:58:33.853: INFO: namespace emptydir-980 deletion completed in 6.205235755s

• [SLOW TEST:14.452 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:58:33.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 20 14:58:33.979: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197889,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 14:58:33.979: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197889,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 20 14:58:44.002: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197904,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 20 14:58:44.003: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197904,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 20 14:58:54.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197918,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 14:58:54.021: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197918,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 20 14:59:04.039: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197933,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 14:59:04.040: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-a,UID:6d64aa62-4851-4c8f-80c7-ff07da3f9dc7,ResourceVersion:21197933,Generation:0,CreationTimestamp:2020-01-20 14:58:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 20 14:59:14.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-b,UID:7d653b03-fc87-4048-ac89-4138e9ff8b82,ResourceVersion:21197947,Generation:0,CreationTimestamp:2020-01-20 14:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 14:59:14.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-b,UID:7d653b03-fc87-4048-ac89-4138e9ff8b82,ResourceVersion:21197947,Generation:0,CreationTimestamp:2020-01-20 14:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 20 14:59:24.067: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-b,UID:7d653b03-fc87-4048-ac89-4138e9ff8b82,ResourceVersion:21197961,Generation:0,CreationTimestamp:2020-01-20 14:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 14:59:24.067: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2294,SelfLink:/api/v1/namespaces/watch-2294/configmaps/e2e-watch-test-configmap-b,UID:7d653b03-fc87-4048-ac89-4138e9ff8b82,ResourceVersion:21197961,Generation:0,CreationTimestamp:2020-01-20 14:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:59:34.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2294" for this suite.
Jan 20 14:59:40.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:59:40.252: INFO: namespace watch-2294 deletion completed in 6.172328019s

• [SLOW TEST:66.398 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
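The watch test above verifies that each watcher, registered with its own label selector, observes events only for ConfigMaps carrying the matching label. A minimal Python sketch of that filtering (illustrative data structures, not the real client-go watch machinery):

```python
def matches(selector, labels):
    """True when every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def deliver(events, selector):
    """Filter a stream of (event_type, name, labels) tuples by label selector."""
    return [(etype, name) for etype, name, labels in events
            if matches(selector, labels)]

events = [
    ("ADDED",    "e2e-watch-test-configmap-a", {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", "e2e-watch-test-configmap-a", {"watch-this-configmap": "multiple-watchers-A"}),
    ("DELETED",  "e2e-watch-test-configmap-a", {"watch-this-configmap": "multiple-watchers-A"}),
    ("ADDED",    "e2e-watch-test-configmap-b", {"watch-this-configmap": "multiple-watchers-B"}),
    ("DELETED",  "e2e-watch-test-configmap-b", {"watch-this-configmap": "multiple-watchers-B"}),
]

watcher_a = deliver(events, {"watch-this-configmap": "multiple-watchers-A"})
watcher_b = deliver(events, {"watch-this-configmap": "multiple-watchers-B"})
print(watcher_a)
print(watcher_b)
```

Each event appears twice in the log above because the test registers more than one watcher per selector; every matching watcher receives the same filtered stream.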
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:59:40.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-80ca5137-1620-47ba-b07d-53d326bbf0cd
STEP: Creating a pod to test consume configMaps
Jan 20 14:59:40.539: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e" in namespace "configmap-6267" to be "success or failure"
Jan 20 14:59:40.550: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638245ms
Jan 20 14:59:42.565: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025641404s
Jan 20 14:59:44.570: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030813369s
Jan 20 14:59:46.580: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040698524s
Jan 20 14:59:48.589: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049742304s
STEP: Saw pod success
Jan 20 14:59:48.589: INFO: Pod "pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e" satisfied condition "success or failure"
Jan 20 14:59:48.594: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e container configmap-volume-test: 
STEP: delete the pod
Jan 20 14:59:48.652: INFO: Waiting for pod pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e to disappear
Jan 20 14:59:48.725: INFO: Pod pod-configmaps-b8a7bd3e-8e29-4289-a2bd-48244702a88e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 14:59:48.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6267" for this suite.
Jan 20 14:59:54.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 14:59:54.902: INFO: namespace configmap-6267 deletion completed in 6.169868473s

• [SLOW TEST:14.650 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
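The defaultMode test checks that every file the kubelet projects from the ConfigMap carries the requested permission bits (0400 is used here as an assumed example; the exact mode is not visible in the log). A rough local sketch of that projection using `os.chmod`:

```python
import os
import stat
import tempfile

def project_configmap(data, default_mode, target_dir):
    """Write each ConfigMap key as a file and apply default_mode, roughly as
    the kubelet does for a configMap volume with defaultMode set."""
    for key, value in data.items():
        path = os.path.join(target_dir, key)
        with open(path, "w") as f:
            f.write(value)
        os.chmod(path, default_mode)

with tempfile.TemporaryDirectory() as d:
    project_configmap({"data-1": "value-1"}, 0o400, d)
    mode = stat.S_IMODE(os.stat(os.path.join(d, "data-1")).st_mode)
    print(stat.filemode(0o100000 | mode))  # -r--------
```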
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 14:59:54.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:00:03.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1074" for this suite.
Jan 20 15:00:45.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:00:45.253: INFO: namespace kubelet-test-1074 deletion completed in 42.156412645s

• [SLOW TEST:50.350 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
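The Kubelet test above starts a busybox pod with a shell command and asserts that the command's stdout is retrievable as the container log. A trivial local analogue with `subprocess` (the actual test fetches logs through the API server, not a local process):

```python
import subprocess

# Run a command and capture its stdout, the way `kubectl logs` would surface
# what the busybox container printed.
result = subprocess.run(["echo", "running in a pod"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # running in a pod
```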
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:00:45.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 15:00:45.671: INFO: Number of nodes with available pods: 0
Jan 20 15:00:45.671: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:46.683: INFO: Number of nodes with available pods: 0
Jan 20 15:00:46.683: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:47.690: INFO: Number of nodes with available pods: 0
Jan 20 15:00:47.691: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:48.680: INFO: Number of nodes with available pods: 0
Jan 20 15:00:48.680: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:49.694: INFO: Number of nodes with available pods: 0
Jan 20 15:00:49.694: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:51.016: INFO: Number of nodes with available pods: 0
Jan 20 15:00:51.016: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:51.716: INFO: Number of nodes with available pods: 0
Jan 20 15:00:51.716: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:53.710: INFO: Number of nodes with available pods: 0
Jan 20 15:00:53.710: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:54.691: INFO: Number of nodes with available pods: 0
Jan 20 15:00:54.691: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:55.696: INFO: Number of nodes with available pods: 1
Jan 20 15:00:55.697: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 20 15:00:56.687: INFO: Number of nodes with available pods: 2
Jan 20 15:00:56.687: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 20 15:00:56.795: INFO: Number of nodes with available pods: 1
Jan 20 15:00:56.795: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:57.820: INFO: Number of nodes with available pods: 1
Jan 20 15:00:57.820: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:58.875: INFO: Number of nodes with available pods: 1
Jan 20 15:00:58.875: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:00:59.817: INFO: Number of nodes with available pods: 1
Jan 20 15:00:59.817: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:00.825: INFO: Number of nodes with available pods: 1
Jan 20 15:01:00.825: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:01.818: INFO: Number of nodes with available pods: 1
Jan 20 15:01:01.818: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:02.813: INFO: Number of nodes with available pods: 1
Jan 20 15:01:02.813: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:03.810: INFO: Number of nodes with available pods: 1
Jan 20 15:01:03.810: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:04.812: INFO: Number of nodes with available pods: 1
Jan 20 15:01:04.812: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:05.815: INFO: Number of nodes with available pods: 1
Jan 20 15:01:05.815: INFO: Node iruya-node is running more than one daemon pod
Jan 20 15:01:06.807: INFO: Number of nodes with available pods: 2
Jan 20 15:01:06.807: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2680, will wait for the garbage collector to delete the pods
Jan 20 15:01:06.878: INFO: Deleting DaemonSet.extensions daemon-set took: 10.341065ms
Jan 20 15:01:07.178: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.412448ms
Jan 20 15:01:15.492: INFO: Number of nodes with available pods: 0
Jan 20 15:01:15.492: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 15:01:15.498: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2680/daemonsets","resourceVersion":"21198218"},"items":null}

Jan 20 15:01:15.502: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2680/pods","resourceVersion":"21198218"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:01:15.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2680" for this suite.
Jan 20 15:01:21.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:01:21.685: INFO: namespace daemonsets-2680 deletion completed in 6.163735005s

• [SLOW TEST:36.432 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
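The DaemonSet test sets one daemon pod's phase to Failed and waits for the controller to revive it; the repeated "Number of nodes with available pods" lines are the poll while that happens. A toy reconcile step capturing the retry behavior (not the real controller code):

```python
def reconcile(nodes, pods):
    """Drop Failed pods, then ensure each node has a daemon pod (the retry)."""
    desired = {node: phase for node, phase in pods.items() if phase != "Failed"}
    for node in nodes:
        desired.setdefault(node, "Pending")  # controller recreates on that node
    return desired

nodes = ["iruya-node", "iruya-server-sfge57q7djm7"]
pods = {node: "Running" for node in nodes}
pods["iruya-node"] = "Failed"  # the test marks one daemon pod as Failed
pods = reconcile(nodes, pods)
print(sorted(pods.items()))
```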
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:01:21.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-385f91ee-25a1-44fa-b136-9d2de293fa64
STEP: Creating a pod to test consume configMaps
Jan 20 15:01:21.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2" in namespace "configmap-8506" to be "success or failure"
Jan 20 15:01:21.924: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.417259ms
Jan 20 15:01:23.939: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029262269s
Jan 20 15:01:25.947: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036924596s
Jan 20 15:01:27.956: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045414537s
Jan 20 15:01:29.980: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069425285s
Jan 20 15:01:31.993: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082343284s
STEP: Saw pod success
Jan 20 15:01:31.993: INFO: Pod "pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2" satisfied condition "success or failure"
Jan 20 15:01:32.000: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2 container configmap-volume-test: 
STEP: delete the pod
Jan 20 15:01:32.053: INFO: Waiting for pod pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2 to disappear
Jan 20 15:01:32.105: INFO: Pod pod-configmaps-8f36d6c2-b38e-4203-a73d-c511961638c2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:01:32.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8506" for this suite.
Jan 20 15:01:38.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:01:38.246: INFO: namespace configmap-8506 deletion completed in 6.134104153s

• [SLOW TEST:16.560 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
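The repeated `Waiting up to 5m0s ... Phase="Pending" ... Elapsed` lines come from a poll-until-terminal-phase loop in the Go test framework. A simplified Python sketch of the same pattern (names are illustrative, not the real framework helpers):

```python
import itertools

def wait_for_terminal(phases, max_polls=150):
    """Poll observed pod phases until Succeeded/Failed or the poll budget runs out."""
    for polls, phase in enumerate(itertools.islice(phases, max_polls)):
        if phase in ("Succeeded", "Failed"):
            return phase, polls
    raise TimeoutError("pod never reached a terminal phase")

observed = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_terminal(observed))  # ('Succeeded', 5)
```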
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:01:38.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6b1a6077-6275-421c-8f08-2ee259d0efad
STEP: Creating configMap with name cm-test-opt-upd-2866017f-d251-4d78-883f-1e34ffec8215
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6b1a6077-6275-421c-8f08-2ee259d0efad
STEP: Updating configmap cm-test-opt-upd-2866017f-d251-4d78-883f-1e34ffec8215
STEP: Creating configMap with name cm-test-opt-create-640ffab1-b2d2-448c-93af-30db02ca7220
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:01:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7910" for this suite.
Jan 20 15:02:16.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:02:16.991: INFO: namespace projected-7910 deletion completed in 22.243743945s

• [SLOW TEST:38.744 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
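The projected ConfigMap test deletes one optional source, updates another, and creates a third, then waits for the volume to reflect all of it. A sketch of the "optional" source semantics (illustrative; the real projection happens in the kubelet's volume sync loop): a missing optional ConfigMap projects nothing instead of failing the mount, and changes are re-projected on the next sync.

```python
def project(sources, configmaps):
    """Merge data from each (name, optional) source; skip missing optional ones."""
    out = {}
    for name, optional in sources:
        if name not in configmaps:
            if optional:
                continue  # tolerated: optional source is absent
            raise KeyError(f"required ConfigMap {name} missing")
        out.update(configmaps[name])
    return out

sources = [("cm-del", True), ("cm-upd", True), ("cm-create", True)]
cms = {"cm-upd": {"data-1": "value-1"}}   # cm-del deleted, cm-create not yet made
print(project(sources, cms))              # {'data-1': 'value-1'}
cms["cm-upd"] = {"data-3": "value-3"}     # updated
cms["cm-create"] = {"data-1": "value-1"}  # newly created
print(project(sources, cms))
```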
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:02:16.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 20 15:02:17.189: INFO: Waiting up to 5m0s for pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2" in namespace "emptydir-155" to be "success or failure"
Jan 20 15:02:17.199: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.264077ms
Jan 20 15:02:19.205: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016397787s
Jan 20 15:02:21.234: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044875826s
Jan 20 15:02:23.244: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054863276s
Jan 20 15:02:25.255: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066592052s
Jan 20 15:02:27.263: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074264111s
STEP: Saw pod success
Jan 20 15:02:27.263: INFO: Pod "pod-761be140-549b-47bb-a8aa-d72aca0a74c2" satisfied condition "success or failure"
Jan 20 15:02:27.267: INFO: Trying to get logs from node iruya-node pod pod-761be140-549b-47bb-a8aa-d72aca0a74c2 container test-container: 
STEP: delete the pod
Jan 20 15:02:27.351: INFO: Waiting for pod pod-761be140-549b-47bb-a8aa-d72aca0a74c2 to disappear
Jan 20 15:02:27.372: INFO: Pod pod-761be140-549b-47bb-a8aa-d72aca0a74c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:02:27.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-155" for this suite.
Jan 20 15:02:33.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:02:33.649: INFO: namespace emptydir-155 deletion completed in 6.270393205s

• [SLOW TEST:16.658 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
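The emptydir test writes a file as a non-root user with mode 0644 on the default (disk-backed) medium and then checks the resulting permissions. The permission string such a check compares against can be reproduced with the stdlib:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)          # the mode requested by the test pod
mode = os.stat(path).st_mode   # includes the regular-file type bit
print(stat.filemode(mode))     # -rw-r--r--
os.unlink(path)
```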
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:02:33.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 20 15:02:41.931: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:02:42.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4957" for this suite.
Jan 20 15:02:48.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:02:48.211: INFO: namespace container-runtime-4957 deletion completed in 6.189799136s

• [SLOW TEST:14.561 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
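The termination-message test has the container write "DONE" to a non-default terminationMessagePath; after the container exits, the kubelet reads that file and surfaces it in the container status, which is what the `Expected: &{DONE} to match ...` line asserts. A sketch of the read side (the 4096-byte cap is an assumption for this sketch; the kubelet enforces its own limits):

```python
import os
import tempfile

def read_termination_message(path, limit=4096):
    """Return up to `limit` bytes from the terminationMessagePath, if present."""
    if not os.path.exists(path):
        return ""
    with open(path) as f:
        return f.read(limit)

with tempfile.TemporaryDirectory() as d:
    msg_path = os.path.join(d, "termination-log")  # illustrative non-default path
    with open(msg_path, "w") as f:
        f.write("DONE")                            # what the test container writes
    msg = read_termination_message(msg_path)
print(msg)  # DONE
```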
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:02:48.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 20 15:03:05.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:05.103: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:07.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:07.111: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:09.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:09.116: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:11.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:11.111: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:13.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:13.118: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:15.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:15.119: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 15:03:17.104: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 15:03:17.112: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:03:17.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3301" for this suite.
Jan 20 15:03:39.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:03:39.291: INFO: namespace container-lifecycle-hook-3301 deletion completed in 22.174130444s

• [SLOW TEST:51.080 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
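The lifecycle test wires a postStart httpGet hook to a helper pod: once the main container starts, an HTTP GET is issued against the configured host/port/path, and the "check poststart hook" step verifies the handler saw it. A self-contained sketch of that handshake (local sockets stand in for the pods; the path is illustrative):

```python
import http.server
import threading
import urllib.request

hits = []

class HookHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        hits.append(self.path)    # the "check poststart hook" step reads this
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
urllib.request.urlopen(f"http://127.0.0.1:{port}/echo?msg=poststart").read()
server.shutdown()
print(hits)  # ['/echo?msg=poststart']
```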
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:03:39.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8b9ba6ca-fadc-45f1-b720-20d07d293c8b
STEP: Creating a pod to test consume secrets
Jan 20 15:03:39.466: INFO: Waiting up to 5m0s for pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3" in namespace "secrets-3861" to be "success or failure"
Jan 20 15:03:39.477: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.622208ms
Jan 20 15:03:41.484: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018495991s
Jan 20 15:03:43.491: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025540263s
Jan 20 15:03:45.527: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061159808s
Jan 20 15:03:47.534: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068294684s
Jan 20 15:03:49.541: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075224568s
STEP: Saw pod success
Jan 20 15:03:49.541: INFO: Pod "pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3" satisfied condition "success or failure"
Jan 20 15:03:49.544: INFO: Trying to get logs from node iruya-node pod pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3 container secret-volume-test: 
STEP: delete the pod
Jan 20 15:03:49.911: INFO: Waiting for pod pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3 to disappear
Jan 20 15:03:49.921: INFO: Pod pod-secrets-8c728f76-bdea-462d-bfe4-541a5f9ee8f3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:03:49.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3861" for this suite.
Jan 20 15:03:55.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:03:56.092: INFO: namespace secrets-3861 deletion completed in 6.165556317s

• [SLOW TEST:16.801 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:03:56.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 15:03:56.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609" in namespace "projected-2220" to be "success or failure"
Jan 20 15:03:56.208: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177187ms
Jan 20 15:03:58.219: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017204063s
Jan 20 15:04:00.228: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026621603s
Jan 20 15:04:02.237: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035807284s
Jan 20 15:04:04.251: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049259168s
STEP: Saw pod success
Jan 20 15:04:04.251: INFO: Pod "downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609" satisfied condition "success or failure"
Jan 20 15:04:04.255: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609 container client-container: 
STEP: delete the pod
Jan 20 15:04:04.352: INFO: Waiting for pod downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609 to disappear
Jan 20 15:04:04.357: INFO: Pod downwardapi-volume-00d76262-2f3e-4b32-8236-50a8df368609 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:04:04.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2220" for this suite.
Jan 20 15:04:10.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:04:10.668: INFO: namespace projected-2220 deletion completed in 6.295805518s

• [SLOW TEST:14.575 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:04:10.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 15:04:10.740: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08" in namespace "projected-1673" to be "success or failure"
Jan 20 15:04:10.745: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53925ms
Jan 20 15:04:12.752: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012123921s
Jan 20 15:04:14.780: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039942309s
Jan 20 15:04:16.807: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067476345s
Jan 20 15:04:18.815: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075390367s
STEP: Saw pod success
Jan 20 15:04:18.815: INFO: Pod "downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08" satisfied condition "success or failure"
Jan 20 15:04:18.822: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08 container client-container: 
STEP: delete the pod
Jan 20 15:04:18.925: INFO: Waiting for pod downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08 to disappear
Jan 20 15:04:18.932: INFO: Pod downwardapi-volume-30e995b4-554f-416e-9902-9ada66baea08 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:04:18.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1673" for this suite.
Jan 20 15:04:24.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:04:25.145: INFO: namespace projected-1673 deletion completed in 6.209435547s

• [SLOW TEST:14.476 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:04:25.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 20 15:04:35.302: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1950b268-591c-48bf-b6b3-605a76147edd,GenerateName:,Namespace:events-9974,SelfLink:/api/v1/namespaces/events-9974/pods/send-events-1950b268-591c-48bf-b6b3-605a76147edd,UID:435654ae-5934-496d-94d6-6e00e4814606,ResourceVersion:21198746,Generation:0,CreationTimestamp:2020-01-20 15:04:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 190612734,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gztv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gztv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-gztv8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0034a4920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0034a4940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 15:04:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 15:04:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 15:04:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 15:04:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-20 15:04:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-20 15:04:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://57fdfec21d16131abe8bbfb5b61664db351022917ad5ff3d338deb75d48a8f25}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 20 15:04:37.312: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 20 15:04:39.328: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:04:39.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9974" for this suite.
Jan 20 15:05:17.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:05:17.593: INFO: namespace events-9974 deletion completed in 38.218152609s

• [SLOW TEST:52.448 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
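Editor's note: the `Saw scheduler event` / `Saw kubelet event` lines in the Events spec above correspond to filtering the pod's events by their source component. A simplified Go sketch under the assumption that only the event's `Source.Component` is inspected — the `event` type here is a cut-down stand-in, not the real `corev1.Event`:

```go
package main

import "fmt"

// event mirrors only the fields this sketch needs; the real API type is
// corev1.Event with Source.Component nested one level deeper.
type event struct {
	Reason    string
	Component string
}

// sawEventFrom reports whether any event was emitted by the given
// component, distinguishing scheduler events ("default-scheduler")
// from kubelet events.
func sawEventFrom(events []event, component string) bool {
	for _, e := range events {
		if e.Component == component {
			return true
		}
	}
	return false
}

func main() {
	evts := []event{
		{Reason: "Scheduled", Component: "default-scheduler"},
		{Reason: "Pulled", Component: "kubelet"},
		{Reason: "Started", Component: "kubelet"},
	}
	fmt.Println("scheduler event:", sawEventFrom(evts, "default-scheduler"))
	fmt.Println("kubelet event:", sawEventFrom(evts, "kubelet"))
}
```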
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:05:17.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4689
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 15:05:17.706: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 15:05:51.867: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 15:05:51.868: INFO: >>> kubeConfig: /root/.kube/config
I0120 15:05:51.940082       8 log.go:172] (0xc000d426e0) (0xc003137cc0) Create stream
I0120 15:05:51.940172       8 log.go:172] (0xc000d426e0) (0xc003137cc0) Stream added, broadcasting: 1
I0120 15:05:51.945764       8 log.go:172] (0xc000d426e0) Reply frame received for 1
I0120 15:05:51.945814       8 log.go:172] (0xc000d426e0) (0xc001158000) Create stream
I0120 15:05:51.945823       8 log.go:172] (0xc000d426e0) (0xc001158000) Stream added, broadcasting: 3
I0120 15:05:51.947229       8 log.go:172] (0xc000d426e0) Reply frame received for 3
I0120 15:05:51.947253       8 log.go:172] (0xc000d426e0) (0xc00022e140) Create stream
I0120 15:05:51.947270       8 log.go:172] (0xc000d426e0) (0xc00022e140) Stream added, broadcasting: 5
I0120 15:05:51.948443       8 log.go:172] (0xc000d426e0) Reply frame received for 5
I0120 15:05:52.105731       8 log.go:172] (0xc000d426e0) Data frame received for 3
I0120 15:05:52.105879       8 log.go:172] (0xc001158000) (3) Data frame handling
I0120 15:05:52.105987       8 log.go:172] (0xc001158000) (3) Data frame sent
I0120 15:05:52.287613       8 log.go:172] (0xc000d426e0) Data frame received for 1
I0120 15:05:52.287750       8 log.go:172] (0xc000d426e0) (0xc001158000) Stream removed, broadcasting: 3
I0120 15:05:52.287809       8 log.go:172] (0xc003137cc0) (1) Data frame handling
I0120 15:05:52.287835       8 log.go:172] (0xc003137cc0) (1) Data frame sent
I0120 15:05:52.287881       8 log.go:172] (0xc000d426e0) (0xc00022e140) Stream removed, broadcasting: 5
I0120 15:05:52.287924       8 log.go:172] (0xc000d426e0) (0xc003137cc0) Stream removed, broadcasting: 1
I0120 15:05:52.287949       8 log.go:172] (0xc000d426e0) Go away received
I0120 15:05:52.288693       8 log.go:172] (0xc000d426e0) (0xc003137cc0) Stream removed, broadcasting: 1
I0120 15:05:52.288716       8 log.go:172] (0xc000d426e0) (0xc001158000) Stream removed, broadcasting: 3
I0120 15:05:52.288722       8 log.go:172] (0xc000d426e0) (0xc00022e140) Stream removed, broadcasting: 5
Jan 20 15:05:52.289: INFO: Waiting for endpoints: map[]
Jan 20 15:05:52.316: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 15:05:52.316: INFO: >>> kubeConfig: /root/.kube/config
I0120 15:05:52.380218       8 log.go:172] (0xc0013fe8f0) (0xc00022e5a0) Create stream
I0120 15:05:52.380310       8 log.go:172] (0xc0013fe8f0) (0xc00022e5a0) Stream added, broadcasting: 1
I0120 15:05:52.390054       8 log.go:172] (0xc0013fe8f0) Reply frame received for 1
I0120 15:05:52.390123       8 log.go:172] (0xc0013fe8f0) (0xc001dac0a0) Create stream
I0120 15:05:52.390130       8 log.go:172] (0xc0013fe8f0) (0xc001dac0a0) Stream added, broadcasting: 3
I0120 15:05:52.392743       8 log.go:172] (0xc0013fe8f0) Reply frame received for 3
I0120 15:05:52.392801       8 log.go:172] (0xc0013fe8f0) (0xc003137d60) Create stream
I0120 15:05:52.392815       8 log.go:172] (0xc0013fe8f0) (0xc003137d60) Stream added, broadcasting: 5
I0120 15:05:52.396845       8 log.go:172] (0xc0013fe8f0) Reply frame received for 5
I0120 15:05:52.555599       8 log.go:172] (0xc0013fe8f0) Data frame received for 3
I0120 15:05:52.555835       8 log.go:172] (0xc001dac0a0) (3) Data frame handling
I0120 15:05:52.555893       8 log.go:172] (0xc001dac0a0) (3) Data frame sent
I0120 15:05:52.736856       8 log.go:172] (0xc0013fe8f0) Data frame received for 1
I0120 15:05:52.736965       8 log.go:172] (0xc0013fe8f0) (0xc001dac0a0) Stream removed, broadcasting: 3
I0120 15:05:52.737021       8 log.go:172] (0xc00022e5a0) (1) Data frame handling
I0120 15:05:52.737040       8 log.go:172] (0xc00022e5a0) (1) Data frame sent
I0120 15:05:52.737088       8 log.go:172] (0xc0013fe8f0) (0xc003137d60) Stream removed, broadcasting: 5
I0120 15:05:52.737114       8 log.go:172] (0xc0013fe8f0) (0xc00022e5a0) Stream removed, broadcasting: 1
I0120 15:05:52.737155       8 log.go:172] (0xc0013fe8f0) Go away received
I0120 15:05:52.737864       8 log.go:172] (0xc0013fe8f0) (0xc00022e5a0) Stream removed, broadcasting: 1
I0120 15:05:52.737906       8 log.go:172] (0xc0013fe8f0) (0xc001dac0a0) Stream removed, broadcasting: 3
I0120 15:05:52.737922       8 log.go:172] (0xc0013fe8f0) (0xc003137d60) Stream removed, broadcasting: 5
Jan 20 15:05:52.738: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:05:52.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4689" for this suite.
Jan 20 15:06:14.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:06:14.996: INFO: namespace pod-network-test-4689 deletion completed in 22.227869555s

• [SLOW TEST:57.402 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
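Editor's note: the `ExecWithOptions` lines in the networking spec above show the probe itself — a `curl` from the host-test pod to the test pod's `/dial` endpoint, which relays a `hostName` request to the target IP over UDP and reports what it heard back. A self-contained Go sketch that builds that URL and decodes a response of the assumed shape; the `dialResponse` struct and the local test server are illustrative stand-ins, not the actual netexec implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
)

// dialURL builds the probe URL seen in the log: the test pod at
// containerIP:containerPort is asked to dial targetHost:targetPort over
// the given protocol and echo back the target's hostname.
func dialURL(containerIP string, containerPort int, protocol, targetHost string, targetPort, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", protocol)
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s:%d/dial?%s", containerIP, containerPort, q.Encode())
}

// dialResponse is an assumed response shape, inferred from the test's
// behavior rather than the netexec source.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Local stand-in for the test pod's HTTP endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(dialResponse{Responses: []string{"pod-hostname"}})
	}))
	defer srv.Close()

	fmt.Println(dialURL("10.44.0.2", 8080, "udp", "10.32.0.4", 8081, 1))

	resp, err := http.Get(srv.URL + "/dial")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var d dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
		panic(err)
	}
	fmt.Println(d.Responses)
}
```

Note that `url.Values.Encode` sorts query keys alphabetically, so the generated URL orders its parameters differently from the hand-written curl in the log while being semantically identical.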
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:06:14.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:06:27.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6997" for this suite.
Jan 20 15:07:19.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:07:19.266: INFO: namespace kubelet-test-6997 deletion completed in 52.112678066s

• [SLOW TEST:64.269 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:07:19.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 20 15:07:29.985: INFO: Successfully updated pod "labelsupdatef2bba7e5-6754-4f7c-980c-e13682fd7599"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:07:32.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1541" for this suite.
Jan 20 15:07:54.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:07:54.226: INFO: namespace downward-api-1541 deletion completed in 22.159761006s

• [SLOW TEST:34.960 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:07:54.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:08:02.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1470" for this suite.
Jan 20 15:08:08.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:08:08.478: INFO: namespace kubelet-test-1470 deletion completed in 6.155053632s

• [SLOW TEST:14.252 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:08:08.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c479137c-f9e6-4253-8258-bc3289e06e5b
STEP: Creating secret with name s-test-opt-upd-16abd590-2db1-4795-9116-5b3f445eda42
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c479137c-f9e6-4253-8258-bc3289e06e5b
STEP: Updating secret s-test-opt-upd-16abd590-2db1-4795-9116-5b3f445eda42
STEP: Creating secret with name s-test-opt-create-6f796da7-a586-4d32-bb94-810a1ace6041
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:09:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8740" for this suite.
Jan 20 15:09:48.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:09:48.462: INFO: namespace secrets-8740 deletion completed in 22.120456881s

• [SLOW TEST:99.983 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:09:48.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-8c2x
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 15:09:48.604: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8c2x" in namespace "subpath-219" to be "success or failure"
Jan 20 15:09:48.681: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Pending", Reason="", readiness=false. Elapsed: 77.134309ms
Jan 20 15:09:50.688: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084771631s
Jan 20 15:09:52.694: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090623402s
Jan 20 15:09:54.707: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1028275s
Jan 20 15:09:56.718: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 8.114601381s
Jan 20 15:09:58.730: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 10.126183634s
Jan 20 15:10:00.738: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 12.134769704s
Jan 20 15:10:02.746: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 14.142778638s
Jan 20 15:10:04.755: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 16.151566852s
Jan 20 15:10:06.769: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 18.165336134s
Jan 20 15:10:08.780: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 20.176183359s
Jan 20 15:10:10.788: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 22.184268745s
Jan 20 15:10:12.797: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 24.193159324s
Jan 20 15:10:14.806: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 26.2027903s
Jan 20 15:10:16.816: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Running", Reason="", readiness=true. Elapsed: 28.211967921s
Jan 20 15:10:18.825: INFO: Pod "pod-subpath-test-projected-8c2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.22177091s
STEP: Saw pod success
Jan 20 15:10:18.826: INFO: Pod "pod-subpath-test-projected-8c2x" satisfied condition "success or failure"
Jan 20 15:10:18.833: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-8c2x container test-container-subpath-projected-8c2x: 
STEP: delete the pod
Jan 20 15:10:18.904: INFO: Waiting for pod pod-subpath-test-projected-8c2x to disappear
Jan 20 15:10:18.940: INFO: Pod pod-subpath-test-projected-8c2x no longer exists
STEP: Deleting pod pod-subpath-test-projected-8c2x
Jan 20 15:10:18.940: INFO: Deleting pod "pod-subpath-test-projected-8c2x" in namespace "subpath-219"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:10:18.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-219" for this suite.
Jan 20 15:10:24.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:10:25.112: INFO: namespace subpath-219 deletion completed in 6.163286267s

• [SLOW TEST:36.650 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:10:25.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 20 15:10:25.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1" in namespace "projected-990" to be "success or failure"
Jan 20 15:10:25.188: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58838ms
Jan 20 15:10:27.201: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015455033s
Jan 20 15:10:29.210: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024966593s
Jan 20 15:10:31.236: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05049944s
Jan 20 15:10:33.244: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058387416s
STEP: Saw pod success
Jan 20 15:10:33.244: INFO: Pod "downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1" satisfied condition "success or failure"
Jan 20 15:10:33.251: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1 container client-container: 
STEP: delete the pod
Jan 20 15:10:33.324: INFO: Waiting for pod downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1 to disappear
Jan 20 15:10:33.336: INFO: Pod downwardapi-volume-4756dccf-2e9f-4370-b286-9f03b3002aa1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:10:33.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-990" for this suite.
Jan 20 15:10:39.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:10:39.622: INFO: namespace projected-990 deletion completed in 6.278952131s

• [SLOW TEST:14.510 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 20 15:10:39.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8a85ff2b-df0e-4424-81c2-92c4b992f74a
STEP: Creating a pod to test consume secrets
Jan 20 15:10:39.786: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466" in namespace "projected-8667" to be "success or failure"
Jan 20 15:10:39.796: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Pending", Reason="", readiness=false. Elapsed: 9.673958ms
Jan 20 15:10:41.813: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026057017s
Jan 20 15:10:43.861: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074718826s
Jan 20 15:10:45.884: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097464717s
Jan 20 15:10:47.919: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Running", Reason="", readiness=true. Elapsed: 8.132684743s
Jan 20 15:10:49.953: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166896947s
STEP: Saw pod success
Jan 20 15:10:49.954: INFO: Pod "pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466" satisfied condition "success or failure"
Jan 20 15:10:49.957: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 15:10:50.334: INFO: Waiting for pod pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466 to disappear
Jan 20 15:10:50.343: INFO: Pod pod-projected-secrets-11581aa3-a53a-4ad2-9aa8-a79fb7ba2466 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 20 15:10:50.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8667" for this suite.
Jan 20 15:10:56.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 15:10:56.496: INFO: namespace projected-8667 deletion completed in 6.140444458s

• [SLOW TEST:16.874 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jan 20 15:10:56.497: INFO: Running AfterSuite actions on all nodes
Jan 20 15:10:56.497: INFO: Running AfterSuite actions on node 1
Jan 20 15:10:56.497: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8083.800 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS