I0125 12:56:09.993357 8 e2e.go:243] Starting e2e run "48a6e4e5-95ee-45ac-bb6c-e2e6f3be086b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579956968 - Will randomize all specs
Will run 215 of 4412 specs

Jan 25 12:56:10.351: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 12:56:10.356: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 25 12:56:10.391: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 25 12:56:10.466: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 25 12:56:10.466: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 25 12:56:10.466: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 25 12:56:10.478: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 25 12:56:10.478: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 25 12:56:10.478: INFO: e2e test version: v1.15.7
Jan 25 12:56:10.480: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:56:10.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Jan 25 12:56:10.599: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0125 12:56:20.654444 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 12:56:20.654: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:56:20.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1709" for this suite.
Jan 25 12:56:26.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:56:26.799: INFO: namespace gc-1709 deletion completed in 6.142069208s

• [SLOW TEST:16.319 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
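The garbage-collector spec above creates a ReplicationController, deletes it without orphaning, and waits for the dependent pods to be reaped. A minimal sketch of the kind of owning object involved (name, labels, and image are illustrative assumptions, not the suite's exact spec):

    # Hypothetical RC; deleting it with cascading (non-orphaning)
    # deletion lets the garbage collector remove its pods.
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: gc-test-rc        # illustrative name
    spec:
      replicas: 2
      selector:
        app: gc-test
      template:
        metadata:
          labels:
            app: gc-test
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine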
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:56:26.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-447e3ab0-95bf-4f05-beef-e7aef6802885
STEP: Creating a pod to test consume secrets
Jan 25 12:56:26.941: INFO: Waiting up to 5m0s for pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f" in namespace "secrets-7593" to be "success or failure"
Jan 25 12:56:26.975: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.061705ms
Jan 25 12:56:28.981: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040466897s
Jan 25 12:56:30.995: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054310316s
Jan 25 12:56:33.045: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103937007s
Jan 25 12:56:35.050: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109463988s
Jan 25 12:56:37.059: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118095212s
STEP: Saw pod success
Jan 25 12:56:37.059: INFO: Pod "pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f" satisfied condition "success or failure"
Jan 25 12:56:37.064: INFO: Trying to get logs from node iruya-node pod pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f container secret-volume-test:
STEP: delete the pod
Jan 25 12:56:37.239: INFO: Waiting for pod pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f to disappear
Jan 25 12:56:37.247: INFO: Pod pod-secrets-0530dcb5-b763-4f25-a52f-ceb6930fd21f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:56:37.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7593" for this suite.
Jan 25 12:56:43.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:56:43.411: INFO: namespace secrets-7593 deletion completed in 6.153565803s

• [SLOW TEST:16.611 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
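The secret-volume spec above mounts a Secret into a pod through a volume with an items: mapping, then reads the remapped file back. A rough sketch of such a pod, reusing the secret name logged above (the key/path mapping, command, and image are illustrative assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-example   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map-447e3ab0-95bf-4f05-beef-e7aef6802885
          items:
          - key: data-1               # assumed key name
            path: new-path-data-1     # assumed mapped path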
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:56:43.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 25 12:56:51.511: INFO: Pod pod-hostip-95fbd69f-655e-4738-82f7-4af5cd42d580 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:56:51.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3521" for this suite.
Jan 25 12:57:13.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:57:13.653: INFO: namespace pods-3521 deletion completed in 22.135599833s

• [SLOW TEST:30.242 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:57:13.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0446f81d-77e8-48f7-912b-162a471a37bf
STEP: Creating a pod to test consume secrets
Jan 25 12:57:13.763: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf" in namespace "projected-4743" to be "success or failure"
Jan 25 12:57:13.841: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 78.717181ms
Jan 25 12:57:15.863: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099926093s
Jan 25 12:57:17.884: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120916665s
Jan 25 12:57:19.892: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129576971s
Jan 25 12:57:21.901: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Running", Reason="", readiness=true. Elapsed: 8.138200209s
Jan 25 12:57:23.922: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159540672s
STEP: Saw pod success
Jan 25 12:57:23.922: INFO: Pod "pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf" satisfied condition "success or failure"
Jan 25 12:57:23.933: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf container projected-secret-volume-test:
STEP: delete the pod
Jan 25 12:57:24.095: INFO: Waiting for pod pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf to disappear
Jan 25 12:57:24.117: INFO: Pod pod-projected-secrets-97688205-4fa8-4ec9-9cd3-62e3dc1b3ccf no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:57:24.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4743" for this suite.
Jan 25 12:57:30.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:57:30.283: INFO: namespace projected-4743 deletion completed in 6.157972377s

• [SLOW TEST:16.629 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:57:30.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:58:26.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4799" for this suite.
Jan 25 12:58:32.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:58:33.042: INFO: namespace container-runtime-4799 deletion completed in 6.15867402s

• [SLOW TEST:62.760 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
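In the container-runtime spec above, the three container names encode the restart policy being exercised: 'terminate-cmd-rpa', 'terminate-cmd-rpof', and 'terminate-cmd-rpn' correspond to restartPolicy Always, OnFailure, and Never. A minimal sketch of the OnFailure case (image and exit behaviour are illustrative assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-cmd-rpof-example   # illustrative name
    spec:
      restartPolicy: OnFailure
      containers:
      - name: terminate-cmd-rpof
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sh", "-c", "exit 1"]         # non-zero exit triggers a restart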
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:58:33.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 12:58:33.180: INFO: Waiting up to 5m0s for pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e" in namespace "emptydir-4235" to be "success or failure"
Jan 25 12:58:33.218: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.009951ms
Jan 25 12:58:35.230: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050278383s
Jan 25 12:58:37.242: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062503679s
Jan 25 12:58:39.261: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080957243s
Jan 25 12:58:41.272: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091750759s
Jan 25 12:58:43.279: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09878233s
Jan 25 12:58:45.288: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.10834535s
STEP: Saw pod success
Jan 25 12:58:45.288: INFO: Pod "pod-17ff301e-a669-4a21-9c81-9444e943e17e" satisfied condition "success or failure"
Jan 25 12:58:45.293: INFO: Trying to get logs from node iruya-node pod pod-17ff301e-a669-4a21-9c81-9444e943e17e container test-container:
STEP: delete the pod
Jan 25 12:58:45.360: INFO: Waiting for pod pod-17ff301e-a669-4a21-9c81-9444e943e17e to disappear
Jan 25 12:58:45.371: INFO: Pod pod-17ff301e-a669-4a21-9c81-9444e943e17e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:58:45.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4235" for this suite.
Jan 25 12:58:51.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:58:51.548: INFO: namespace emptydir-4235 deletion completed in 6.162381279s

• [SLOW TEST:18.505 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
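The emptyDir spec above checks ownership and mode bits on a memory-backed volume. A rough sketch of the (root,0777,tmpfs) case (the probe command and image are illustrative assumptions; the suite uses its own test image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-emptydir-example   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sh", "-c", "stat -c %a /test-volume && touch /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory          # tmpfs-backed emptyDir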
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:58:51.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 25 12:58:51.811: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:59:10.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5823" for this suite.
Jan 25 12:59:16.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:59:16.337: INFO: namespace init-container-5823 deletion completed in 6.119621219s

• [SLOW TEST:24.788 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
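The init-container spec above builds a restartPolicy: Never pod whose init containers must all run to completion before the app container starts; the gap between pod creation and teardown in the log is the init containers executing in sequence. A minimal sketch (names, image, and commands are illustrative assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-example   # illustrative name
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["/bin/true"]
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]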
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:59:16.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:59:28.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4097" for this suite.
Jan 25 12:59:34.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:59:34.728: INFO: namespace kubelet-test-4097 deletion completed in 6.111539952s

• [SLOW TEST:18.391 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:59:34.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6187/configmap-test-d51c592f-4e8c-4f4b-9c0c-c704bb9e6982
STEP: Creating a pod to test consume configMaps
Jan 25 12:59:34.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff" in namespace "configmap-6187" to be "success or failure"
Jan 25 12:59:34.848: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.840305ms
Jan 25 12:59:36.859: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02393083s
Jan 25 12:59:38.874: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039363424s
Jan 25 12:59:40.883: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048048355s
Jan 25 12:59:42.907: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072116346s
STEP: Saw pod success
Jan 25 12:59:42.907: INFO: Pod "pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff" satisfied condition "success or failure"
Jan 25 12:59:42.917: INFO: Trying to get logs from node iruya-node pod pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff container env-test:
STEP: delete the pod
Jan 25 12:59:43.322: INFO: Waiting for pod pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff to disappear
Jan 25 12:59:43.353: INFO: Pod pod-configmaps-53ef6c7e-1159-47f9-a7b1-4b268bb394ff no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 12:59:43.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6187" for this suite.
Jan 25 12:59:49.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:59:49.737: INFO: namespace configmap-6187 deletion completed in 6.347777472s

• [SLOW TEST:15.009 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 12:59:49.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 25 12:59:49.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9362 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 25 13:00:02.017: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0125 13:00:00.837905 34 log.go:172] (0xc000118bb0) (0xc000601180) Create stream\nI0125 13:00:00.838862 34 log.go:172] (0xc000118bb0) (0xc000601180) Stream added, broadcasting: 1\nI0125 13:00:00.871211 34 log.go:172] (0xc000118bb0) Reply frame received for 1\nI0125 13:00:00.871545 34 log.go:172] (0xc000118bb0) (0xc000b90320) Create stream\nI0125 13:00:00.871563 34 log.go:172] (0xc000118bb0) (0xc000b90320) Stream added, broadcasting: 3\nI0125 13:00:00.875687 34 log.go:172] (0xc000118bb0) Reply frame received for 3\nI0125 13:00:00.875953 34 log.go:172] (0xc000118bb0) (0xc0007400a0) Create stream\nI0125 13:00:00.875992 34 log.go:172] (0xc000118bb0) (0xc0007400a0) Stream added, broadcasting: 5\nI0125 13:00:00.880412 34 log.go:172] (0xc000118bb0) Reply frame received for 5\nI0125 13:00:00.880564 34 log.go:172] (0xc000118bb0) (0xc000601220) Create stream\nI0125 13:00:00.880577 34 log.go:172] (0xc000118bb0) (0xc000601220) Stream added, broadcasting: 7\nI0125 13:00:00.889227 34 log.go:172] (0xc000118bb0) Reply frame received for 7\nI0125 13:00:00.889934 34 log.go:172] (0xc000b90320) (3) Writing data frame\nI0125 13:00:00.890323 34 log.go:172] (0xc000b90320) (3) Writing data frame\nI0125 13:00:00.913842 34 log.go:172] (0xc000118bb0) Data frame received for 5\nI0125 13:00:00.914008 34 log.go:172] (0xc0007400a0) (5) Data frame handling\nI0125 13:00:00.914071 34 log.go:172] (0xc0007400a0) (5) Data frame sent\nI0125 13:00:00.915464 34 log.go:172] (0xc000118bb0) Data frame received for 5\nI0125 13:00:00.915476 34 log.go:172] (0xc0007400a0) (5) Data frame handling\nI0125 13:00:00.915486 34 log.go:172] (0xc0007400a0) (5) Data frame sent\nI0125 13:00:01.958419 34 log.go:172] (0xc000118bb0) Data frame received for 1\nI0125 13:00:01.958682 34 log.go:172] (0xc000601180) (1) Data frame handling\nI0125 13:00:01.958748 34 log.go:172] (0xc000601180) (1) Data frame sent\nI0125 13:00:01.960656 34 log.go:172] (0xc000118bb0) (0xc000601180) Stream removed, broadcasting: 1\nI0125 13:00:01.960844 34 log.go:172] (0xc000118bb0) (0xc0007400a0) Stream removed, broadcasting: 5\nI0125 13:00:01.961056 34 log.go:172] (0xc000118bb0) (0xc000b90320) Stream removed, broadcasting: 3\nI0125 13:00:01.961793 34 log.go:172] (0xc000118bb0) (0xc000601220) Stream removed, broadcasting: 7\nI0125 13:00:01.961961 34 log.go:172] (0xc000118bb0) Go away received\nI0125 13:00:01.962133 34 log.go:172] (0xc000118bb0) (0xc000601180) Stream removed, broadcasting: 1\nI0125 13:00:01.962148 34 log.go:172] (0xc000118bb0) (0xc000b90320) Stream removed, broadcasting: 3\nI0125 13:00:01.962160 34 log.go:172] (0xc000118bb0) (0xc0007400a0) Stream removed, broadcasting: 5\nI0125 13:00:01.962172 34 log.go:172] (0xc000118bb0) (0xc000601220) Stream removed, broadcasting: 7\n"
Jan 25 13:00:02.017: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:00:04.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9362" for this suite.
Jan 25 13:00:10.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:10.182: INFO: namespace kubectl-9362 deletion completed in 6.146294135s

• [SLOW TEST:20.444 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
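The kubectl invocation above goes through the deprecated --generator=job/v1 path, as the stderr output warns. A Job manifest roughly equivalent to that command line (field values are inferred from the logged flags; treat this as an approximation):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: e2e-test-rm-busybox-job
    spec:
      template:
        spec:
          restartPolicy: OnFailure       # from --restart=OnFailure
          containers:
          - name: e2e-test-rm-busybox-job
            image: docker.io/library/busybox:1.29
            stdin: true                  # from --stdin
            command: ["sh", "-c", "cat && echo 'stdin closed'"]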
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:00:10.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 13:00:10.290: INFO: Waiting up to 5m0s for pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c" in namespace "emptydir-6754" to be "success or failure"
Jan 25 13:00:10.297: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777612ms
Jan 25 13:00:12.312: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02184942s
Jan 25 13:00:14.329: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038567477s
Jan 25 13:00:16.343: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052369593s
Jan 25 13:00:18.404: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113428012s
Jan 25 13:00:20.414: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Running", Reason="", readiness=true. Elapsed: 10.124165074s
Jan 25 13:00:22.483: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.192555131s
STEP: Saw pod success
Jan 25 13:00:22.483: INFO: Pod "pod-b09d052b-ab13-4adf-9d4c-22421637f01c" satisfied condition "success or failure"
Jan 25 13:00:22.496: INFO: Trying to get logs from node iruya-node pod pod-b09d052b-ab13-4adf-9d4c-22421637f01c container test-container:
STEP: delete the pod
Jan 25 13:00:22.726: INFO: Waiting for pod pod-b09d052b-ab13-4adf-9d4c-22421637f01c to disappear
Jan 25 13:00:22.740: INFO: Pod pod-b09d052b-ab13-4adf-9d4c-22421637f01c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:00:22.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6754" for this suite.
Jan 25 13:00:28.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:29.054: INFO: namespace emptydir-6754 deletion completed in 6.303154144s

• [SLOW TEST:18.871 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:00:29.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-69ad9bcd-fd26-4300-a0dd-70536f7de20c
STEP: Creating a pod to test consume configMaps
Jan 25 13:00:29.214: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6" in namespace "projected-889" to be "success or failure"
Jan 25 13:00:29.219: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.566166ms
Jan 25 13:00:31.227: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01310093s
Jan 25 13:00:33.234: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019991516s
Jan 25 13:00:35.443: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229346411s
Jan 25 13:00:37.463: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249651947s
Jan 25 13:00:39.513: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298830785s
Jan 25 13:00:41.523: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.309186827s
STEP: Saw pod success
Jan 25 13:00:41.523: INFO: Pod "pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6" satisfied condition "success or failure"
Jan 25 13:00:41.527: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6 container projected-configmap-volume-test:
STEP: delete the pod
Jan 25 13:00:41.606: INFO: Waiting for pod pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6 to disappear
Jan 25 13:00:41.648: INFO: Pod pod-projected-configmaps-991ec18b-8d31-4f16-8274-3ccaca9c55e6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:00:41.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-889" for this suite.
Jan 25 13:00:47.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:47.988: INFO: namespace projected-889 deletion completed in 6.332562318s

• [SLOW TEST:18.934 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:00:47.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9930/secret-test-82160573-efeb-445a-ac6b-b34a49bf4673
STEP: Creating a pod to test consume secrets
Jan 25 13:00:48.098: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e" in namespace "secrets-9930" to be "success or failure"
Jan 25 13:00:48.108: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.915922ms
Jan 25 13:00:50.119: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021738934s
Jan 25 13:00:52.132: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034804921s
Jan 25 13:00:54.137: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039721529s
Jan 25 13:00:56.147: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049485268s
Jan 25 13:00:58.154: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056739302s
Jan 25 13:01:00.167: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069792861s
STEP: Saw pod success
Jan 25 13:01:00.167: INFO: Pod "pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e" satisfied condition "success or failure"
Jan 25 13:01:00.174: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e container env-test:
STEP: delete the pod
Jan 25 13:01:00.431: INFO: Waiting for pod pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e to disappear
Jan 25 13:01:00.456: INFO: Pod pod-configmaps-0c02fd9f-289f-45ee-bd9e-475ebefa5e6e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:01:00.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9930" for this suite.
Jan 25 13:01:06.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:01:06.884: INFO: namespace secrets-9930 deletion completed in 6.414592876s

• [SLOW TEST:18.895 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:01:06.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 13:01:07.062: INFO: Waiting up to 5m0s for pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b" in namespace "emptydir-5205" to be "success or failure"
Jan 25 13:01:07.094: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.445387ms
Jan 25 13:01:09.108: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045443187s
Jan 25 13:01:11.119: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056701211s
Jan 25 13:01:13.126: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063409814s
Jan 25 13:01:15.140: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077534261s
Jan 25 13:01:17.182: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119687809s
STEP: Saw pod success
Jan 25 13:01:17.182: INFO: Pod "pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b" satisfied condition "success or failure"
Jan 25 13:01:17.190: INFO: Trying to get logs from node iruya-node pod pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b container test-container:
STEP: delete the pod
Jan 25 13:01:17.327: INFO: Waiting for pod pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b to disappear
Jan 25 13:01:17.335: INFO: Pod pod-d8eb3f04-4b52-430b-8d85-6ebee9b2ad7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:01:17.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5205" for this suite.
Jan 25 13:01:23.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:01:23.488: INFO: namespace emptydir-5205 deletion completed in 6.146577031s

• [SLOW TEST:16.605 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:01:23.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:01:23.792: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 13:01:23.930: INFO: Number of nodes with available pods: 0
Jan 25 13:01:23.930: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:26.130: INFO: Number of nodes with available pods: 0
Jan 25 13:01:26.130: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:27.171: INFO: Number of nodes with available pods: 0
Jan 25 13:01:27.171: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:27.950: INFO: Number of nodes with available pods: 0
Jan 25 13:01:27.950: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:28.975: INFO: Number of nodes with available pods: 0
Jan 25 13:01:28.975: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:30.876: INFO: Number of nodes with available pods: 0
Jan 25 13:01:30.876: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:31.513: INFO: Number of nodes with available pods: 0
Jan 25 13:01:31.513: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:31.976: INFO: Number of nodes with available pods: 0
Jan 25 13:01:31.976: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:33.692: INFO: Number of nodes with available pods: 0
Jan 25 13:01:33.692: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:33.962: INFO: Number of nodes with available pods: 0
Jan 25 13:01:33.962: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:34.953: INFO: Number of nodes with available pods: 0
Jan 25 13:01:34.953: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:35.948: INFO: Number of nodes with available pods: 2
Jan 25 13:01:35.948: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 25 13:01:36.175: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:36.175: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:37.201: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:37.201: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:38.732: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:38.732: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:39.198: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:39.198: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:40.201: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:40.201: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:41.197: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:41.197: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:42.194: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:42.194: INFO: Wrong image for pod: daemon-set-vqggq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:42.194: INFO: Pod daemon-set-vqggq is not available
Jan 25 13:01:43.199: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:43.199: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:44.536: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:44.536: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:45.198: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:45.198: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:46.200: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:46.200: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:47.890: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:47.891: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:48.196: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:48.196: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:49.206: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:49.206: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:50.221: INFO: Pod daemon-set-9prx5 is not available
Jan 25 13:01:50.221: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:51.199: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:52.203: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:53.199: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:54.202: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:55.204: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:56.218: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:56.218: INFO: Pod daemon-set-s6w6j is not available
Jan 25 13:01:57.197: INFO: Wrong image for pod: daemon-set-s6w6j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 13:01:57.197: INFO: Pod daemon-set-s6w6j is not available
Jan 25 13:01:58.253: INFO: Pod daemon-set-2cj4j is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 25 13:01:58.309: INFO: Number of nodes with available pods: 1
Jan 25 13:01:58.310: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:01:59.329: INFO: Number of nodes with available pods: 1
Jan 25 13:01:59.329: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:00.348: INFO: Number of nodes with available pods: 1
Jan 25 13:02:00.348: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:01.325: INFO: Number of nodes with available pods: 1
Jan 25 13:02:01.325: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:02.331: INFO: Number of nodes with available pods: 1
Jan 25 13:02:02.331: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:03.327: INFO: Number of nodes with available pods: 1
Jan 25 13:02:03.328: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:04.347: INFO: Number of nodes with available pods: 1
Jan 25 13:02:04.347: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:05.327: INFO: Number of nodes with available pods: 1
Jan 25 13:02:05.327: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:02:06.333: INFO: Number of nodes with available pods: 2
Jan 25 13:02:06.333: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2670, will wait for the garbage collector to delete the pods
Jan 25 13:02:06.439: INFO: Deleting DaemonSet.extensions daemon-set took: 19.348195ms
Jan 25 13:02:06.739: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.588308ms
Jan 25 13:02:17.943: INFO: Number of nodes with available pods: 0
Jan 25 13:02:17.943: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 13:02:17.946: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2670/daemonsets","resourceVersion":"21807466"},"items":null}
Jan 25 13:02:17.948: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2670/pods","resourceVersion":"21807466"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:02:17.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2670" for this suite.
Jan 25 13:02:25.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:02:26.085: INFO: namespace daemonsets-2670 deletion completed in 8.118685041s

• [SLOW TEST:62.596 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
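The DaemonSet spec above starts pods on docker.io/library/nginx:1.14-alpine, patches the pod template to gcr.io/kubernetes-e2e-test-images/redis:1.0, and then polls until RollingUpdate has replaced the pods node by node, which is what the long run of 'Wrong image for pod' lines records. A minimal sketch of such a DaemonSet (the label key is an illustrative assumption):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          daemonset-name: daemon-set   # assumed label
      updateStrategy:
        type: RollingUpdate            # the strategy under test
      template:
        metadata:
          labels:
            daemonset-name: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine   # initial image per the log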
Jan 25 13:03:03.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:03:03.664: INFO: namespace projected-6672 deletion completed in 22.160329294s • [SLOW TEST:37.578 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:03:03.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-751b6495-ce51-4562-8fcb-25b353a8f33b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:03:18.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6658" for this suite. 
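
The binary-data check above depends on the ConfigMap binaryData field, which carries base64-encoded bytes alongside plain-text data and surfaces both as files when mounted as a volume. A minimal sketch with illustrative names and payload:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: binary-demo
    data:
      text: "hello"                   # plain UTF-8 value
    binaryData:
      blob.bin: aGVsbG8gd29ybGQ=     # base64 of arbitrary bytes
    EOF
    # Mounted as a volume at /etc/cfg, a pod sees /etc/cfg/text and /etc/cfg/blob.bin.
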
Jan 25 13:03:42.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:03:42.588: INFO: namespace configmap-6658 deletion completed in 24.33927828s • [SLOW TEST:38.923 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:03:42.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 25 13:03:42.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8627' Jan 25 13:03:43.426: INFO: stderr: "" Jan 25 13:03:43.426: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jan 25 13:03:44.460: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:44.460: INFO: Found 0 / 1 Jan 25 13:03:45.444: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:45.444: INFO: Found 0 / 1 Jan 25 13:03:46.517: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:46.517: INFO: Found 0 / 1 Jan 25 13:03:47.441: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:47.441: INFO: Found 0 / 1 Jan 25 13:03:48.439: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:48.440: INFO: Found 0 / 1 Jan 25 13:03:49.438: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:49.438: INFO: Found 0 / 1 Jan 25 13:03:50.436: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:50.436: INFO: Found 0 / 1 Jan 25 13:03:51.439: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:51.439: INFO: Found 0 / 1 Jan 25 13:03:52.434: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:52.434: INFO: Found 0 / 1 Jan 25 13:03:53.487: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:53.487: INFO: Found 0 / 1 Jan 25 13:03:54.439: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:54.439: INFO: Found 0 / 1 Jan 25 13:03:55.437: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:55.437: INFO: Found 1 / 1 Jan 25 13:03:55.437: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 25 13:03:55.442: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:03:55.442: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Jan 25 13:03:55.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627' Jan 25 13:03:55.610: INFO: stderr: "" Jan 25 13:03:55.610: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Jan 13:03:53.119 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 13:03:53.119 # Server started, Redis version 3.2.12\n1:M 25 Jan 13:03:53.139 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 13:03:53.140 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 25 13:03:55.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627 --tail=1' Jan 25 13:03:55.749: INFO: stderr: "" Jan 25 13:03:55.749: INFO: stdout: "1:M 25 Jan 13:03:53.140 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 25 13:03:55.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627 --limit-bytes=1' Jan 25 13:03:55.883: INFO: stderr: "" Jan 25 13:03:55.883: INFO: stdout: " " STEP: exposing timestamps Jan 25 13:03:55.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627 --tail=1 --timestamps' Jan 25 13:03:56.037: INFO: stderr: "" Jan 25 13:03:56.037: INFO: stdout: "2020-01-25T13:03:53.140875815Z 1:M 25 Jan 13:03:53.140 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 25 13:03:58.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627 --since=1s' Jan 25 13:03:58.851: INFO: stderr: "" Jan 25 13:03:58.851: INFO: stdout: "" Jan 25 13:03:58.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qtkm5 redis-master --namespace=kubectl-8627 --since=24h' Jan 25 13:03:59.038: INFO: stderr: "" Jan 25 13:03:59.039: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Jan 13:03:53.119 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 13:03:53.119 # Server started, Redis version 3.2.12\n1:M 25 Jan 13:03:53.139 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 13:03:53.140 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 25 13:03:59.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8627' Jan 25 13:03:59.153: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 13:03:59.153: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 25 13:03:59.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8627' Jan 25 13:03:59.269: INFO: stderr: "No resources found.\n" Jan 25 13:03:59.269: INFO: stdout: "" Jan 25 13:03:59.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8627 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 13:03:59.386: INFO: stderr: "" Jan 25 13:03:59.386: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:03:59.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8627" for this suite. 
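
The filtering steps above map one-to-one onto kubectl logs flags; with a pod and container substituted for the placeholders, the same sequence is:

    kubectl logs <pod> <container> --tail=1                # only the last line
    kubectl logs <pod> <container> --limit-bytes=1         # only the first byte
    kubectl logs <pod> <container> --tail=1 --timestamps   # prefix each line with its timestamp
    kubectl logs <pod> <container> --since=1s              # empty when nothing was logged in the last second
    kubectl logs <pod> <container> --since=24h             # the whole day, here the full log
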
Jan 25 13:04:05.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:04:05.556: INFO: namespace kubectl-8627 deletion completed in 6.166093512s • [SLOW TEST:22.964 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:04:05.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 25 13:04:25.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:25.842: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:27.842: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:27.853: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:29.843: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:29.868: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:31.843: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:31.864: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:33.843: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:33.874: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:35.843: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:35.853: INFO: Pod pod-with-prestop-http-hook still exists Jan 25 13:04:37.843: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 25 13:04:37.853: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:04:37.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1719" for this suite. 
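
The hook under test is declared on the container itself: before stopping the container, the kubelet issues the configured HTTP GET, and the test's separate handler pod records the request. A minimal sketch; the pod name, path, and port are illustrative.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        lifecycle:
          preStop:
            httpGet:
              path: /shutdown   # called by the kubelet before the TERM signal
              port: 8080
    EOF
    # Deleting the pod fires the hook during graceful termination:
    kubectl delete pod prestop-demo
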
Jan 25 13:04:59.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:05:00.023: INFO: namespace container-lifecycle-hook-1719 deletion completed in 22.131676494s • [SLOW TEST:54.467 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:05:00.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 25 13:05:00.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1311' Jan 25 13:05:00.252: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 13:05:00.252: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jan 25 13:05:02.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1311' Jan 25 13:05:02.468: INFO: stderr: "" Jan 25 13:05:02.468: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:05:02.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1311" for this suite. 
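
The deprecation warning above is the point of this test: with no --generator, the v1.15-era kubectl run creates a Deployment and tells you to switch. For reference, the command the test ran plus the replacements the warning names (my-pod and my-deployment are illustrative):

    # what the test ran (creates deployment.apps/e2e-test-nginx-deployment, with a warning):
    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # the replacements suggested by the warning:
    kubectl run my-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
    kubectl create deployment my-deployment --image=docker.io/library/nginx:1.14-alpine
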
Jan 25 13:05:08.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:05:08.735: INFO: namespace kubectl-1311 deletion completed in 6.258930833s • [SLOW TEST:8.712 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:05:08.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0125 13:05:50.019138 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 13:05:50.019: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:05:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2175" for this suite. 
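
The orphaning behaviour above is driven by the delete options, not the controller: deleting a replication controller with an Orphan propagation policy removes the rc but leaves its pods running, which is what the 30-second check verifies. Roughly, with an illustrative rc name (--cascade=false is the v1.15-era spelling; later kubectl uses --cascade=orphan):

    kubectl delete rc example-rc --cascade=false
    kubectl get pods    # the rc's pods are still there, now ownerless
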
Jan 25 13:06:02.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:06:04.631: INFO: namespace gc-2175 deletion completed in 14.608347207s • [SLOW TEST:55.895 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:06:04.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 25 13:06:09.684: INFO: created pod pod-service-account-defaultsa Jan 25 13:06:09.684: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 25 13:06:09.768: INFO: created pod pod-service-account-mountsa Jan 25 13:06:09.768: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 25 13:06:09.931: INFO: created pod pod-service-account-nomountsa Jan 25 13:06:09.931: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 25 13:06:09.955: INFO: created pod pod-service-account-defaultsa-mountspec Jan 25 13:06:09.956: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 25 13:06:10.070: INFO: created pod pod-service-account-mountsa-mountspec Jan 25 13:06:10.070: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 25 13:06:10.304: INFO: created pod pod-service-account-nomountsa-mountspec Jan 25 13:06:10.305: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 25 13:06:10.353: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 25 13:06:10.353: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 25 13:06:10.379: INFO: created pod pod-service-account-mountsa-nomountspec Jan 25 13:06:10.379: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 25 13:06:10.532: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 25 13:06:10.532: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:06:10.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6624" for this suite. 
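
The nine pods above enumerate the combinations of the two automountServiceAccountToken knobs; when both are set, the pod-level field wins over the service account's. A minimal sketch of the opt-out case, with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa
    automountServiceAccountToken: false    # default for pods using this account
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nomount-demo
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: false  # pod-level setting overrides the account's
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
    EOF
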
Jan 25 13:07:31.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:07:31.990: INFO: namespace svcaccounts-6624 deletion completed in 1m21.389576561s • [SLOW TEST:87.358 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:07:31.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3497 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 13:07:32.107: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 25 13:08:14.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3497 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 13:08:14.433: INFO: >>> kubeConfig: /root/.kube/config I0125 13:08:14.548817 8 log.go:172] (0xc00090e630) (0xc001134780) Create stream I0125 13:08:14.549230 8 log.go:172] (0xc00090e630) (0xc001134780) Stream added, broadcasting: 1 I0125 13:08:14.569875 8 log.go:172] (0xc00090e630) Reply frame received for 1 I0125 13:08:14.569976 8 log.go:172] (0xc00090e630) (0xc001134aa0) Create stream I0125 13:08:14.570006 8 log.go:172] (0xc00090e630) (0xc001134aa0) Stream added, broadcasting: 3 I0125 13:08:14.574693 8 log.go:172] (0xc00090e630) Reply frame received for 3 I0125 13:08:14.574749 8 log.go:172] (0xc00090e630) (0xc0001b6140) Create stream I0125 13:08:14.574766 8 log.go:172] (0xc00090e630) (0xc0001b6140) Stream added, broadcasting: 5 I0125 13:08:14.579516 8 log.go:172] (0xc00090e630) Reply frame received for 5 I0125 13:08:15.561647 8 log.go:172] (0xc00090e630) Data frame received for 3 I0125 13:08:15.561794 8 log.go:172] (0xc001134aa0) (3) Data frame handling I0125 13:08:15.561834 8 log.go:172] (0xc001134aa0) (3) Data frame sent I0125 13:08:15.918044 8 log.go:172] (0xc00090e630) (0xc001134aa0) Stream removed, broadcasting: 3 I0125 13:08:15.918265 8 log.go:172] (0xc00090e630) Data frame received for 1 I0125 13:08:15.918286 8 log.go:172] (0xc001134780) (1) Data frame handling I0125 13:08:15.918301 8 log.go:172] (0xc001134780) (1) Data frame sent I0125 13:08:15.918318 8 log.go:172] (0xc00090e630) (0xc001134780) Stream removed, broadcasting: 1 I0125 13:08:15.918366 8 log.go:172] (0xc00090e630) 
(0xc0001b6140) Stream removed, broadcasting: 5 I0125 13:08:15.918698 8 log.go:172] (0xc00090e630) Go away received I0125 13:08:15.918867 8 log.go:172] (0xc00090e630) (0xc001134780) Stream removed, broadcasting: 1 I0125 13:08:15.918912 8 log.go:172] (0xc00090e630) (0xc001134aa0) Stream removed, broadcasting: 3 I0125 13:08:15.918931 8 log.go:172] (0xc00090e630) (0xc0001b6140) Stream removed, broadcasting: 5 Jan 25 13:08:15.919: INFO: Waiting for endpoints: map[] Jan 25 13:08:15.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3497 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 13:08:15.932: INFO: >>> kubeConfig: /root/.kube/config I0125 13:08:15.996738 8 log.go:172] (0xc00090f4a0) (0xc001135360) Create stream I0125 13:08:15.996796 8 log.go:172] (0xc00090f4a0) (0xc001135360) Stream added, broadcasting: 1 I0125 13:08:16.003841 8 log.go:172] (0xc00090f4a0) Reply frame received for 1 I0125 13:08:16.003878 8 log.go:172] (0xc00090f4a0) (0xc000e4e000) Create stream I0125 13:08:16.003894 8 log.go:172] (0xc00090f4a0) (0xc000e4e000) Stream added, broadcasting: 3 I0125 13:08:16.006019 8 log.go:172] (0xc00090f4a0) Reply frame received for 3 I0125 13:08:16.006042 8 log.go:172] (0xc00090f4a0) (0xc001135400) Create stream I0125 13:08:16.006052 8 log.go:172] (0xc00090f4a0) (0xc001135400) Stream added, broadcasting: 5 I0125 13:08:16.007439 8 log.go:172] (0xc00090f4a0) Reply frame received for 5 I0125 13:08:16.133815 8 log.go:172] (0xc00090f4a0) Data frame received for 3 I0125 13:08:16.133879 8 log.go:172] (0xc000e4e000) (3) Data frame handling I0125 13:08:16.133901 8 log.go:172] (0xc000e4e000) (3) Data frame sent I0125 13:08:16.276310 8 log.go:172] (0xc00090f4a0) Data frame received for 1 I0125 13:08:16.276367 8 log.go:172] (0xc001135360) (1) Data frame handling I0125 13:08:16.276378 8 log.go:172] (0xc001135360) (1) Data frame sent I0125 13:08:16.277269 8 log.go:172] (0xc00090f4a0) (0xc001135360) Stream removed, broadcasting: 1 I0125 13:08:16.277356 8 log.go:172] (0xc00090f4a0) (0xc000e4e000) Stream removed, broadcasting: 3 I0125 13:08:16.277694 8 log.go:172] (0xc00090f4a0) (0xc001135400) Stream removed, broadcasting: 5 I0125 13:08:16.277749 8 log.go:172] (0xc00090f4a0) (0xc001135360) Stream removed, broadcasting: 1 I0125 13:08:16.277770 8 log.go:172] (0xc00090f4a0) (0xc000e4e000) Stream removed, broadcasting: 3 I0125 13:08:16.277788 8 log.go:172] (0xc00090f4a0) (0xc001135400) Stream removed, broadcasting: 5 Jan 25 13:08:16.278: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:08:16.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3497" for this suite. 
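
The exec traffic logged above is the whole mechanism of the check: one test pod curls another pod's /dial endpoint, which sends a UDP probe to the target pod and reports what answered. Expressed as a plain kubectl exec (the namespace, pod name, IPs, and the /dial URL are exactly as captured in this run):

    kubectl -n pod-network-test-3497 exec host-test-container-pod -- /bin/sh -c \
      "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
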
Jan 25 13:08:38.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:08:38.439: INFO: namespace pod-network-test-3497 deletion completed in 22.15267777s • [SLOW TEST:66.448 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:08:38.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-r8hg STEP: Creating a pod to test atomic-volume-subpath Jan 25 13:08:38.691: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-r8hg" in namespace "subpath-5124" to be "success or failure" Jan 25 13:08:38.896: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 205.356086ms Jan 25 13:08:40.956: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265135289s Jan 25 13:08:42.964: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272802634s Jan 25 13:08:44.972: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281536051s Jan 25 13:08:46.985: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294496298s Jan 25 13:08:49.001: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310241751s Jan 25 13:08:51.057: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 12.366451066s Jan 25 13:08:53.065: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 14.373752335s Jan 25 13:08:55.141: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 16.44962s Jan 25 13:08:57.166: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 18.474861835s Jan 25 13:08:59.179: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 20.488210381s Jan 25 13:09:01.195: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.504530751s Jan 25 13:09:03.209: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 24.518257248s Jan 25 13:09:05.219: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 26.527688214s Jan 25 13:09:07.228: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 28.537089662s Jan 25 13:09:09.248: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 30.557370644s Jan 25 13:09:11.272: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Running", Reason="", readiness=true. Elapsed: 32.581071313s Jan 25 13:09:13.282: INFO: Pod "pod-subpath-test-projected-r8hg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.591244598s STEP: Saw pod success Jan 25 13:09:13.282: INFO: Pod "pod-subpath-test-projected-r8hg" satisfied condition "success or failure" Jan 25 13:09:13.286: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-r8hg container test-container-subpath-projected-r8hg: STEP: delete the pod Jan 25 13:09:13.666: INFO: Waiting for pod pod-subpath-test-projected-r8hg to disappear Jan 25 13:09:13.675: INFO: Pod pod-subpath-test-projected-r8hg no longer exists STEP: Deleting pod pod-subpath-test-projected-r8hg Jan 25 13:09:13.675: INFO: Deleting pod "pod-subpath-test-projected-r8hg" in namespace "subpath-5124" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:09:13.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5124" for this suite. Jan 25 13:09:19.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:09:19.909: INFO: namespace subpath-5124 deletion completed in 6.221565681s • [SLOW TEST:41.469 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:09:19.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-741ad344-e976-4b06-b644-463b76302cd8 STEP: Creating a pod to test consume configMaps Jan 25 13:09:20.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0" in namespace "projected-8926" to be "success or failure" Jan 25 13:09:20.192: INFO: Pod 
"pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.619155ms Jan 25 13:09:22.203: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028398159s Jan 25 13:09:24.223: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048240826s Jan 25 13:09:26.232: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057438268s Jan 25 13:09:28.278: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103524188s Jan 25 13:09:30.287: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112181562s Jan 25 13:09:32.301: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.126180993s STEP: Saw pod success Jan 25 13:09:32.301: INFO: Pod "pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0" satisfied condition "success or failure" Jan 25 13:09:32.313: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0 container projected-configmap-volume-test: STEP: delete the pod Jan 25 13:09:33.458: INFO: Waiting for pod pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0 to disappear Jan 25 13:09:33.467: INFO: Pod pod-projected-configmaps-8b774f7a-585d-462d-b5f9-7a26d02d9fc0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:09:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8926" for this suite. 
Jan 25 13:09:39.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:09:39.698: INFO: namespace projected-8926 deletion completed in 6.22363305s • [SLOW TEST:19.789 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:09:39.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:09:45.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9102" for this suite. 
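
The ordering guarantee under test is visible from the raw watch API as well: every watch opened at the same resourceVersion must replay the same events in the same order. A rough sketch via kubectl proxy, with an illustrative namespace and resource and <rv> standing for an observed resourceVersion:

    kubectl proxy --port=8001 &
    # two concurrent watches opened from one resourceVersion must emit identical event streams
    curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=<rv>'
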
Jan 25 13:09:51.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:09:51.578: INFO: namespace watch-9102 deletion completed in 6.205228792s • [SLOW TEST:11.880 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:09:51.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-27f365a2-0085-44d5-bdae-05182aaabe58 STEP: Creating a pod to test consume secrets Jan 25 13:09:51.910: INFO: Waiting up to 5m0s for pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f" in namespace "secrets-4105" to be "success or failure" Jan 25 13:09:52.054: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.341899ms Jan 25 13:09:54.065: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155100273s Jan 25 13:09:56.083: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17272687s Jan 25 13:09:58.173: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262837627s Jan 25 13:10:00.180: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269764957s Jan 25 13:10:02.288: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377589483s Jan 25 13:10:04.890: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.979818145s Jan 25 13:10:06.902: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.991771297s STEP: Saw pod success Jan 25 13:10:06.902: INFO: Pod "pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f" satisfied condition "success or failure" Jan 25 13:10:06.907: INFO: Trying to get logs from node iruya-node pod pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f container secret-volume-test: STEP: delete the pod Jan 25 13:10:07.083: INFO: Waiting for pod pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f to disappear Jan 25 13:10:07.093: INFO: Pod pod-secrets-727b90f9-1310-4b37-9aef-973343e0db2f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:10:07.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4105" for this suite. Jan 25 13:10:13.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:10:13.324: INFO: namespace secrets-4105 deletion completed in 6.225784922s • [SLOW TEST:21.745 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:10:13.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 25 13:10:13.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3404' Jan 25 13:10:16.467: INFO: stderr: "" Jan 25 13:10:16.467: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 25 13:10:16.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3404' Jan 25 13:10:16.717: INFO: stderr: "" Jan 25 13:10:16.717: INFO: stdout: "update-demo-nautilus-2l6dg update-demo-nautilus-9w6pt " Jan 25 13:10:16.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6dg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:16.871: INFO: stderr: "" Jan 25 13:10:16.871: INFO: stdout: "" Jan 25 13:10:16.872: INFO: update-demo-nautilus-2l6dg is created but not running Jan 25 13:10:21.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3404' Jan 25 13:10:22.215: INFO: stderr: "" Jan 25 13:10:22.215: INFO: stdout: "update-demo-nautilus-2l6dg update-demo-nautilus-9w6pt " Jan 25 13:10:22.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6dg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:22.755: INFO: stderr: "" Jan 25 13:10:22.755: INFO: stdout: "" Jan 25 13:10:22.755: INFO: update-demo-nautilus-2l6dg is created but not running Jan 25 13:10:27.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3404' Jan 25 13:10:27.997: INFO: stderr: "" Jan 25 13:10:27.997: INFO: stdout: "update-demo-nautilus-2l6dg update-demo-nautilus-9w6pt " Jan 25 13:10:27.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6dg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:28.124: INFO: stderr: "" Jan 25 13:10:28.124: INFO: stdout: "" Jan 25 13:10:28.124: INFO: update-demo-nautilus-2l6dg is created but not running Jan 25 13:10:33.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3404' Jan 25 13:10:33.253: INFO: stderr: "" Jan 25 13:10:33.253: INFO: stdout: "update-demo-nautilus-2l6dg update-demo-nautilus-9w6pt " Jan 25 13:10:33.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6dg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:33.372: INFO: stderr: "" Jan 25 13:10:33.372: INFO: stdout: "true" Jan 25 13:10:33.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6dg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:33.475: INFO: stderr: "" Jan 25 13:10:33.475: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 13:10:33.475: INFO: validating pod update-demo-nautilus-2l6dg Jan 25 13:10:33.485: INFO: got data: { "image": "nautilus.jpg" } Jan 25 13:10:33.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 25 13:10:33.485: INFO: update-demo-nautilus-2l6dg is verified up and running Jan 25 13:10:33.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w6pt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:33.583: INFO: stderr: "" Jan 25 13:10:33.583: INFO: stdout: "true" Jan 25 13:10:33.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w6pt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3404' Jan 25 13:10:33.662: INFO: stderr: "" Jan 25 13:10:33.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 13:10:33.662: INFO: validating pod update-demo-nautilus-9w6pt Jan 25 13:10:33.771: INFO: got data: { "image": "nautilus.jpg" } Jan 25 13:10:33.771: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 13:10:33.771: INFO: update-demo-nautilus-9w6pt is verified up and running STEP: using delete to clean up resources Jan 25 13:10:33.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3404' Jan 25 13:10:33.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 13:10:33.911: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 25 13:10:33.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3404' Jan 25 13:10:34.029: INFO: stderr: "No resources found.\n" Jan 25 13:10:34.030: INFO: stdout: "" Jan 25 13:10:34.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3404 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 13:10:34.106: INFO: stderr: "" Jan 25 13:10:34.106: INFO: stdout: "update-demo-nautilus-2l6dg\nupdate-demo-nautilus-9w6pt\n" Jan 25 13:10:34.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3404' Jan 25 13:10:36.040: INFO: stderr: "No resources found.\n" Jan 25 13:10:36.040: INFO: stdout: "" Jan 25 13:10:36.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3404 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 13:10:36.374: INFO: stderr: "" Jan 25 13:10:36.374: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:10:36.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3404" for this suite. 
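
Stripped of the polling, the create-and-stop cycle above is three kubectl calls; --grace-period=0 --force is what produces the "Immediate deletion" warning in the output. The manifest path here is illustrative, the flags are as logged:

    kubectl create -f update-demo-rc.yaml
    kubectl get pods -l name=update-demo \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    kubectl delete --grace-period=0 --force -f update-demo-rc.yaml
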
Jan 25 13:11:00.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:11:00.612: INFO: namespace kubectl-3404 deletion completed in 24.225703492s • [SLOW TEST:47.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:11:00.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f9de915f-ff76-4f3e-a4c6-fa5de9d95f7c STEP: Creating a pod to test consume secrets Jan 25 13:11:00.877: INFO: Waiting up to 5m0s for pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd" in namespace "secrets-9110" to be "success or failure" Jan 25 13:11:00.901: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.643341ms Jan 25 13:11:02.915: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038151843s Jan 25 13:11:04.937: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059746603s Jan 25 13:11:06.945: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067643334s Jan 25 13:11:09.466: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589478931s Jan 25 13:11:11.478: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601518199s STEP: Saw pod success Jan 25 13:11:11.479: INFO: Pod "pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd" satisfied condition "success or failure" Jan 25 13:11:11.482: INFO: Trying to get logs from node iruya-node pod pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd container secret-volume-test: STEP: delete the pod Jan 25 13:11:11.709: INFO: Waiting for pod pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd to disappear Jan 25 13:11:11.717: INFO: Pod pod-secrets-748840d0-2cba-4801-9767-7d9d73aa18bd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:11:11.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9110" for this suite. 
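
The mode-and-ownership check above combines three fields: the secret volume's defaultMode sets the file mode, while the pod securityContext's runAsUser and fsGroup make a non-root user the reader and group owner. A minimal sketch; names, IDs, and the sleep command are illustrative.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      securityContext:
        runAsUser: 1000     # non-root
        fsGroup: 1000       # volume files are group-owned by this GID
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        command: ["sleep", "3600"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-vol
        secret:
          secretName: my-secret
          defaultMode: 0400   # octal mode applied to each projected key
    EOF
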
Jan 25 13:11:17.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:11:17.963: INFO: namespace secrets-9110 deletion completed in 6.241436776s • [SLOW TEST:17.351 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:11:17.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 25 13:11:18.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca" in namespace "projected-6308" to be "success or failure" Jan 25 13:11:18.158: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Pending", Reason="", readiness=false. Elapsed: 32.922976ms Jan 25 13:11:20.171: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046276356s Jan 25 13:11:22.186: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06148004s Jan 25 13:11:24.199: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073688661s Jan 25 13:11:26.208: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082500811s Jan 25 13:11:28.216: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Running", Reason="", readiness=true. Elapsed: 10.090737774s Jan 25 13:11:30.889: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Running", Reason="", readiness=true. Elapsed: 12.763885145s Jan 25 13:11:32.896: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.771109616s STEP: Saw pod success Jan 25 13:11:32.896: INFO: Pod "downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca" satisfied condition "success or failure" Jan 25 13:11:32.899: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca container client-container: STEP: delete the pod Jan 25 13:11:33.018: INFO: Waiting for pod downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca to disappear Jan 25 13:11:33.026: INFO: Pod downwardapi-volume-90c8afbf-068d-414d-919d-01318d1e9fca no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:11:33.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6308" for this suite. Jan 25 13:11:39.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:11:39.203: INFO: namespace projected-6308 deletion completed in 6.172506174s • [SLOW TEST:21.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:11:39.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-af4bf247-5de9-409f-a7d3-3ec335939b06 in namespace container-probe-7395 Jan 25 13:11:49.543: INFO: Started pod liveness-af4bf247-5de9-409f-a7d3-3ec335939b06 in namespace container-probe-7395 STEP: checking the pod's current state and verifying that restartCount is present Jan 25 13:11:49.547: INFO: Initial restart count of pod liveness-af4bf247-5de9-409f-a7d3-3ec335939b06 is 0 Jan 25 13:12:11.736: INFO: Restart count of pod container-probe-7395/liveness-af4bf247-5de9-409f-a7d3-3ec335939b06 is now 1 (22.189553287s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:12:11.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7395" for this suite. 
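The restart counted above is the kubelet acting on an HTTP liveness probe against /healthz. A minimal sketch of the pattern, assuming a server that starts healthy and later begins failing its health endpoint (image, args, port, and thresholds are assumptions, not read from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http              # illustrative name
  spec:
    containers:
    - name: liveness
      image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed test image
      args: ["/server"]              # assumed: serves /healthz, then starts failing
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080                 # assumed port
        initialDelaySeconds: 15
        failureThreshold: 1          # one failed probe triggers a restart
  EOF
  # Each probe-driven restart increments status.containerStatuses[].restartCount,
  # which is the counter the test polls.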
Jan 25 13:12:17.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:12:18.007: INFO: namespace container-probe-7395 deletion completed in 6.202528468s • [SLOW TEST:38.803 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:12:18.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jan 25 13:12:18.089: INFO: Waiting up to 5m0s for pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7" in namespace "containers-5748" to be "success or failure" Jan 25 13:12:18.163: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 73.103964ms Jan 25 13:12:20.172: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082495382s Jan 25 13:12:22.180: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090100579s Jan 25 13:12:24.202: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112851023s Jan 25 13:12:26.213: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12386566s STEP: Saw pod success Jan 25 13:12:26.213: INFO: Pod "client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7" satisfied condition "success or failure" Jan 25 13:12:26.218: INFO: Trying to get logs from node iruya-node pod client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7 container test-container: STEP: delete the pod Jan 25 13:12:26.268: INFO: Waiting for pod client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7 to disappear Jan 25 13:12:26.275: INFO: Pod client-containers-ce36ad99-9308-416a-9125-90e50cb97dc7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:12:26.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5748" for this suite. 
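With neither command nor args set in the container spec, the kubelet runs the image's own ENTRYPOINT and CMD, which is the "use defaults" behavior verified here. A minimal sketch under an assumed image:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-defaults   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29   # assumed image; with no command/args
                                              # in the spec, its ENTRYPOINT/CMD run
  EOF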
Jan 25 13:12:32.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:12:32.446: INFO: namespace containers-5748 deletion completed in 6.163840738s • [SLOW TEST:14.439 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:12:32.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3420 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 13:12:32.563: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 25 13:13:10.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3420 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 13:13:10.805: INFO: >>> kubeConfig: /root/.kube/config I0125 13:13:10.942159 8 log.go:172] (0xc001a13130) (0xc00182d0e0) Create stream I0125 13:13:10.942292 8 log.go:172] (0xc001a13130) (0xc00182d0e0) Stream added, broadcasting: 1 I0125 13:13:10.949709 8 log.go:172] (0xc001a13130) Reply frame received for 1 I0125 13:13:10.949766 8 log.go:172] (0xc001a13130) (0xc001bdc820) Create stream I0125 13:13:10.949777 8 log.go:172] (0xc001a13130) (0xc001bdc820) Stream added, broadcasting: 3 I0125 13:13:10.955355 8 log.go:172] (0xc001a13130) Reply frame received for 3 I0125 13:13:10.955438 8 log.go:172] (0xc001a13130) (0xc001378a00) Create stream I0125 13:13:10.955450 8 log.go:172] (0xc001a13130) (0xc001378a00) Stream added, broadcasting: 5 I0125 13:13:10.958072 8 log.go:172] (0xc001a13130) Reply frame received for 5 I0125 13:13:11.109774 8 log.go:172] (0xc001a13130) Data frame received for 3 I0125 13:13:11.109881 8 log.go:172] (0xc001bdc820) (3) Data frame handling I0125 13:13:11.109899 8 log.go:172] (0xc001bdc820) (3) Data frame sent I0125 13:13:11.281759 8 log.go:172] (0xc001a13130) (0xc001bdc820) Stream removed, broadcasting: 3 I0125 13:13:11.282041 8 log.go:172] (0xc001a13130) Data frame received for 1 I0125 13:13:11.282213 8 log.go:172] (0xc001a13130) (0xc001378a00) Stream removed, broadcasting: 5 I0125 13:13:11.282279 8 log.go:172] (0xc00182d0e0) (1) Data frame handling I0125 13:13:11.282536 8 log.go:172] (0xc00182d0e0) (1) Data frame sent I0125 13:13:11.282690 8 log.go:172] (0xc001a13130) (0xc00182d0e0) 
Stream removed, broadcasting: 1 I0125 13:13:11.282730 8 log.go:172] (0xc001a13130) Go away received I0125 13:13:11.283324 8 log.go:172] (0xc001a13130) (0xc00182d0e0) Stream removed, broadcasting: 1 I0125 13:13:11.283354 8 log.go:172] (0xc001a13130) (0xc001bdc820) Stream removed, broadcasting: 3 I0125 13:13:11.283372 8 log.go:172] (0xc001a13130) (0xc001378a00) Stream removed, broadcasting: 5 Jan 25 13:13:11.283: INFO: Waiting for endpoints: map[] Jan 25 13:13:11.298: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3420 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 13:13:11.298: INFO: >>> kubeConfig: /root/.kube/config I0125 13:13:11.364925 8 log.go:172] (0xc0013a4e70) (0xc0021ca280) Create stream I0125 13:13:11.365095 8 log.go:172] (0xc0013a4e70) (0xc0021ca280) Stream added, broadcasting: 1 I0125 13:13:11.378945 8 log.go:172] (0xc0013a4e70) Reply frame received for 1 I0125 13:13:11.378998 8 log.go:172] (0xc0013a4e70) (0xc001378c80) Create stream I0125 13:13:11.379006 8 log.go:172] (0xc0013a4e70) (0xc001378c80) Stream added, broadcasting: 3 I0125 13:13:11.382241 8 log.go:172] (0xc0013a4e70) Reply frame received for 3 I0125 13:13:11.382364 8 log.go:172] (0xc0013a4e70) (0xc001bdc8c0) Create stream I0125 13:13:11.382382 8 log.go:172] (0xc0013a4e70) (0xc001bdc8c0) Stream added, broadcasting: 5 I0125 13:13:11.384402 8 log.go:172] (0xc0013a4e70) Reply frame received for 5 I0125 13:13:11.518907 8 log.go:172] (0xc0013a4e70) Data frame received for 3 I0125 13:13:11.519196 8 log.go:172] (0xc001378c80) (3) Data frame handling I0125 13:13:11.519292 8 log.go:172] (0xc001378c80) (3) Data frame sent I0125 13:13:11.678370 8 log.go:172] (0xc0013a4e70) (0xc001378c80) Stream removed, broadcasting: 3 I0125 13:13:11.678734 8 log.go:172] (0xc0013a4e70) Data frame received for 1 I0125 13:13:11.678779 8 log.go:172] (0xc0021ca280) (1) Data frame handling I0125 13:13:11.678805 8 log.go:172] (0xc0021ca280) (1) Data frame sent I0125 13:13:11.678923 8 log.go:172] (0xc0013a4e70) (0xc001bdc8c0) Stream removed, broadcasting: 5 I0125 13:13:11.679112 8 log.go:172] (0xc0013a4e70) (0xc0021ca280) Stream removed, broadcasting: 1 I0125 13:13:11.679189 8 log.go:172] (0xc0013a4e70) Go away received I0125 13:13:11.679855 8 log.go:172] (0xc0013a4e70) (0xc0021ca280) Stream removed, broadcasting: 1 I0125 13:13:11.679887 8 log.go:172] (0xc0013a4e70) (0xc001378c80) Stream removed, broadcasting: 3 I0125 13:13:11.679899 8 log.go:172] (0xc0013a4e70) (0xc001bdc8c0) Stream removed, broadcasting: 5 Jan 25 13:13:11.680: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:13:11.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3420" for this suite. 
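The two checks above exec curl inside a hostNetwork helper pod against the test webserver's /dial endpoint, which in turn dials the target pod and reports the hostname it reached. The exact command is in the log; an equivalent invocation through kubectl (pod IPs are from this run and will differ elsewhere):

  kubectl exec -n pod-network-test-3420 host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
  # A JSON body listing one hostname response means the target pod was reachable
  # over plain HTTP on port 8080.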
Jan 25 13:13:39.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:13:40.072: INFO: namespace pod-network-test-3420 deletion completed in 28.157184224s • [SLOW TEST:67.626 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:13:40.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jan 25 13:13:40.231: INFO: Waiting up to 5m0s for pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0" in namespace "containers-5210" to be "success or failure" Jan 25 13:13:40.241: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194922ms Jan 25 13:13:42.283: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052057079s Jan 25 13:13:44.290: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059724606s Jan 25 13:13:46.299: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068064751s Jan 25 13:13:48.307: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076620413s Jan 25 13:13:50.326: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095717902s STEP: Saw pod success Jan 25 13:13:50.327: INFO: Pod "client-containers-c471d5a3-7a63-4368-9572-057797dcafd0" satisfied condition "success or failure" Jan 25 13:13:50.342: INFO: Trying to get logs from node iruya-node pod client-containers-c471d5a3-7a63-4368-9572-057797dcafd0 container test-container: STEP: delete the pod Jan 25 13:13:50.555: INFO: Waiting for pod client-containers-c471d5a3-7a63-4368-9572-057797dcafd0 to disappear Jan 25 13:13:50.582: INFO: Pod client-containers-c471d5a3-7a63-4368-9572-057797dcafd0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:13:50.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5210" for this suite. 
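Overriding "the image's default arguments (docker cmd)" means setting args in the container spec: the image CMD is replaced while its ENTRYPOINT (if any) is kept. A minimal sketch under an assumed image and argument list:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-args   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29     # assumed image
      args: ["echo", "override", "arguments"]   # replaces the image CMD only
  EOF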
Jan 25 13:13:56.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:13:56.782: INFO: namespace containers-5210 deletion completed in 6.169359646s • [SLOW TEST:16.709 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:13:56.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-2e5b6237-9394-48ca-9073-b655eea6b0e1 STEP: Creating secret with name secret-projected-all-test-volume-dd2b78a9-be8b-463c-a12e-691d904daae5 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 25 13:13:57.668: INFO: Waiting up to 5m0s for pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf" in namespace "projected-2408" to be "success or failure" Jan 25 13:13:57.681: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.648542ms Jan 25 13:13:59.689: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020996888s Jan 25 13:14:01.704: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036319415s Jan 25 13:14:03.714: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04620801s Jan 25 13:14:05.735: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067221318s Jan 25 13:14:07.742: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.074515139s STEP: Saw pod success Jan 25 13:14:07.742: INFO: Pod "projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf" satisfied condition "success or failure" Jan 25 13:14:07.747: INFO: Trying to get logs from node iruya-node pod projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf container projected-all-volume-test: STEP: delete the pod Jan 25 13:14:07.827: INFO: Waiting for pod projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf to disappear Jan 25 13:14:07.832: INFO: Pod projected-volume-082c9482-fd15-4965-8011-f5e689d66bbf no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:14:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2408" for this suite. Jan 25 13:14:13.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:14:14.060: INFO: namespace projected-2408 deletion completed in 6.219439424s • [SLOW TEST:17.278 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:14:14.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-78da0490-629a-440d-a394-3cf7db0f8cac STEP: Creating configMap with name cm-test-opt-upd-f8bc81f5-c12c-4b98-bf40-1dee51461b91 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-78da0490-629a-440d-a394-3cf7db0f8cac STEP: Updating configmap cm-test-opt-upd-f8bc81f5-c12c-4b98-bf40-1dee51461b91 STEP: Creating configMap with name cm-test-opt-create-f87bb979-d420-4315-a9ad-71e237d99480 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:15:36.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5162" for this suite. 
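The optional flag on a configMap volume source is what lets this pod keep running while one configMap is deleted, another updated, and a third created after the fact; the kubelet re-syncs the mounted files as the objects change. A sketch of one optional source, reusing the cm-test-opt-create name from above (pod name and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-optional   # illustrative name
  spec:
    containers:
    - name: createcm-volume-test
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
      volumeMounts:
      - name: createcm-volume
        mountPath: /etc/cm-volume
    volumes:
    - name: createcm-volume
      configMap:
        name: cm-test-opt-create-f87bb979-d420-4315-a9ad-71e237d99480
        optional: true   # mount succeeds even before the configMap exists
  EOF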
Jan 25 13:16:00.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:16:00.998: INFO: namespace configmap-5162 deletion completed in 24.207607659s • [SLOW TEST:106.937 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:16:01.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 25 13:19:04.368: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:04.397: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:06.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:06.409: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:08.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:08.407: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:10.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:10.406: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:12.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:12.408: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:14.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:14.417: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:16.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:16.409: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:18.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:18.409: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:20.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:20.409: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:22.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:22.412: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:24.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:24.450: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:26.398: 
INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:26.406: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:28.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:28.439: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:30.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:30.405: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:32.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:32.425: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:34.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:34.407: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:36.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:36.408: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:38.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:38.412: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 13:19:40.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 13:19:40.406: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:19:40.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5600" for this suite. Jan 25 13:20:02.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:20:02.626: INFO: namespace container-lifecycle-hook-5600 deletion completed in 22.213078651s • [SLOW TEST:241.626 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:20:02.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 25 13:20:02.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060" in namespace 
"projected-2481" to be "success or failure" Jan 25 13:20:02.883: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060": Phase="Pending", Reason="", readiness=false. Elapsed: 98.045389ms Jan 25 13:20:04.898: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113004451s Jan 25 13:20:06.964: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179105894s Jan 25 13:20:08.976: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190398145s Jan 25 13:20:11.000: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.214683469s STEP: Saw pod success Jan 25 13:20:11.000: INFO: Pod "downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060" satisfied condition "success or failure" Jan 25 13:20:11.016: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060 container client-container: STEP: delete the pod Jan 25 13:20:11.125: INFO: Waiting for pod downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060 to disappear Jan 25 13:20:11.166: INFO: Pod downwardapi-volume-ce65c839-c03c-4689-80b3-2e567f7ea060 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:20:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2481" for this suite. Jan 25 13:20:17.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:20:17.325: INFO: namespace projected-2481 deletion completed in 6.154018475s • [SLOW TEST:14.698 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:20:17.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-4fb92361-18d3-4e8c-b336-27f1f2a7d653 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:20:17.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6820" for this suite. 
Jan 25 13:20:23.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:20:23.644: INFO: namespace secrets-6820 deletion completed in 6.157294756s • [SLOW TEST:6.318 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:20:23.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 25 13:20:23.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9334' Jan 25 13:20:26.360: INFO: stderr: "" Jan 25 13:20:26.360: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 25 13:20:26.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9334' Jan 25 13:20:26.949: INFO: stderr: "" Jan 25 13:20:26.949: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 25 13:20:27.961: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:27.961: INFO: Found 0 / 1 Jan 25 13:20:28.959: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:28.959: INFO: Found 0 / 1 Jan 25 13:20:29.981: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:29.981: INFO: Found 0 / 1 Jan 25 13:20:30.960: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:30.961: INFO: Found 0 / 1 Jan 25 13:20:31.964: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:31.964: INFO: Found 0 / 1 Jan 25 13:20:32.963: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:32.963: INFO: Found 0 / 1 Jan 25 13:20:33.972: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:33.972: INFO: Found 0 / 1 Jan 25 13:20:34.957: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:34.957: INFO: Found 1 / 1 Jan 25 13:20:34.957: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 25 13:20:34.964: INFO: Selector matched 1 pods for map[app:redis] Jan 25 13:20:34.964: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 25 13:20:34.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-m9k48 --namespace=kubectl-9334' Jan 25 13:20:35.199: INFO: stderr: "" Jan 25 13:20:35.199: INFO: stdout: "Name: redis-master-m9k48\nNamespace: kubectl-9334\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Sat, 25 Jan 2020 13:20:26 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://00f5cda868445a2763bb17440e7164ad2de34b6a65352e90a6e6249f0375849a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 25 Jan 2020 13:20:33 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ddjh7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ddjh7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ddjh7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-9334/redis-master-m9k48 to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Jan 25 13:20:35.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9334' Jan 25 13:20:35.312: INFO: stderr: "" Jan 25 13:20:35.312: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9334\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-m9k48\n" Jan 25 13:20:35.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9334' Jan 25 13:20:35.422: INFO: stderr: "" Jan 25 13:20:35.422: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9334\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.81.82\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 25 13:20:35.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 25 13:20:35.626: INFO: stderr: "" Jan 25 13:20:35.626: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Sat, 25 Jan 2020 13:19:40 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 25 Jan 2020 13:19:40 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 25 Jan 2020 13:19:40 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 25 Jan 2020 13:19:40 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kubectl-9334 redis-master-m9k48 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 25 13:20:35.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9334' Jan 25 13:20:35.762: INFO: stderr: "" Jan 25 13:20:35.762: INFO: stdout: "Name: kubectl-9334\nLabels: e2e-framework=kubectl\n e2e-run=48a6e4e5-95ee-45ac-bb6c-e2e6f3be086b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:20:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9334" for this suite. 
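The assertions in this spec grep plain `kubectl describe` text rather than structured output; the same views can be pulled directly (all names are from this run):

  kubectl describe pod redis-master-m9k48 --namespace=kubectl-9334
  kubectl describe rc redis-master --namespace=kubectl-9334
  kubectl describe service redis-master --namespace=kubectl-9334
  kubectl describe node iruya-node
  kubectl describe namespace kubectl-9334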
Jan 25 13:20:57.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:20:58.039: INFO: namespace kubectl-9334 deletion completed in 22.272757795s • [SLOW TEST:34.395 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:20:58.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-b065945f-db79-4964-a765-1d30460b3dbd STEP: Creating secret with name s-test-opt-upd-d59c53de-c2eb-4743-87b4-342a2f2dc35f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b065945f-db79-4964-a765-1d30460b3dbd STEP: Updating secret s-test-opt-upd-d59c53de-c2eb-4743-87b4-342a2f2dc35f STEP: Creating secret with name s-test-opt-create-6af54432-fe78-441a-82c4-73fb20763366 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:21:12.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6652" for this suite. 
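This is the projected-volume variant of the optional-update check: several secret sources can sit under a single mount, each individually marked optional. A minimal sketch using the s-test-opt-create secret from above (pod name and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets   # illustrative name
  spec:
    containers:
    - name: creates-volume-test
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: s-test-opt-create-6af54432-fe78-441a-82c4-73fb20763366
            optional: true   # files appear in the mount once the secret exists
  EOF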
Jan 25 13:21:34.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:21:34.779: INFO: namespace projected-6652 deletion completed in 22.125408635s • [SLOW TEST:36.740 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:21:34.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:21:34.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8391" for this suite. 
Jan 25 13:21:40.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:21:41.110: INFO: namespace kubelet-test-8391 deletion completed in 6.171814645s • [SLOW TEST:6.330 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:21:41.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 25 13:21:41.354: INFO: Waiting up to 5m0s for pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a" in namespace "emptydir-9736" to be "success or failure" Jan 25 13:21:41.410: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.37437ms Jan 25 13:21:43.423: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069125029s Jan 25 13:21:45.440: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086633146s Jan 25 13:21:47.456: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102514818s Jan 25 13:21:49.466: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111934938s STEP: Saw pod success Jan 25 13:21:49.466: INFO: Pod "pod-c484c911-2861-4c6c-8b34-d28b97d4980a" satisfied condition "success or failure" Jan 25 13:21:49.470: INFO: Trying to get logs from node iruya-node pod pod-c484c911-2861-4c6c-8b34-d28b97d4980a container test-container: STEP: delete the pod Jan 25 13:21:49.589: INFO: Waiting for pod pod-c484c911-2861-4c6c-8b34-d28b97d4980a to disappear Jan 25 13:21:49.643: INFO: Pod pod-c484c911-2861-4c6c-8b34-d28b97d4980a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:21:49.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9736" for this suite. 
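"Volume on tmpfs" means an emptyDir with medium Memory; the test then reads the mount's type and mode from inside the container. A minimal sketch, assuming a busybox image and an inline check command:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-tmpfs   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory   # tmpfs-backed; the mode check is what [LinuxOnly] gates
  EOF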
Jan 25 13:21:55.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:21:55.824: INFO: namespace emptydir-9736 deletion completed in 6.17534219s • [SLOW TEST:14.714 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:21:55.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:22:05.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9047" for this suite. 
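Adoption here relies on two things: the bare pod carries a label the controller's selector matches, and the pod has no existing controller ownerReference. A minimal sketch of the sequence (the pod-adoption name appears in the STEP lines above; the image is assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption   # matched by the RC selector below
  spec:
    containers:
    - name: pod-adoption
      image: docker.io/library/nginx:1.14-alpine   # assumed image
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: docker.io/library/nginx:1.14-alpine
  EOF
  # The RC adopts the matching orphan (the pod gains an ownerReference to the RC)
  # and, with replicas: 1 already satisfied, creates no new pod.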
Jan 25 13:22:27.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:22:27.445: INFO: namespace replication-controller-9047 deletion completed in 22.205405897s • [SLOW TEST:31.621 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:22:27.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-100eeb74-2c01-4e1b-b6a1-37e101d582cb STEP: Creating a pod to test consume secrets Jan 25 13:22:27.564: INFO: Waiting up to 5m0s for pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271" in namespace "secrets-4070" to be "success or failure" Jan 25 13:22:27.573: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Pending", Reason="", readiness=false. Elapsed: 9.42616ms Jan 25 13:22:29.584: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020417093s Jan 25 13:22:31.643: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079158488s Jan 25 13:22:33.653: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089562607s Jan 25 13:22:35.669: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104804902s Jan 25 13:22:37.678: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11417248s STEP: Saw pod success Jan 25 13:22:37.678: INFO: Pod "pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271" satisfied condition "success or failure" Jan 25 13:22:37.684: INFO: Trying to get logs from node iruya-node pod pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271 container secret-env-test: STEP: delete the pod Jan 25 13:22:37.769: INFO: Waiting for pod pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271 to disappear Jan 25 13:22:37.781: INFO: Pod pod-secrets-20a2f7e3-1a07-4c5a-9b21-60a677cf9271 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 25 13:22:37.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4070" for this suite. 
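Unlike the volume-based secret specs earlier, this one injects the secret through the container environment with valueFrom.secretKeyRef. A minimal sketch reusing the secret name created above (pod name, key, and image are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-env   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test-100eeb74-2c01-4e1b-b6a1-37e101d582cb
            key: data-1   # assumed key name
  EOF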
Jan 25 13:22:43.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 13:22:44.022: INFO: namespace secrets-4070 deletion completed in 6.230303648s • [SLOW TEST:16.576 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 25 13:22:44.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 25 13:22:44.225: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 13.480698ms)
Jan 25 13:22:44.233: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.686118ms)
Jan 25 13:22:44.238: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.100657ms)
Jan 25 13:22:44.244: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.820779ms)
Jan 25 13:22:44.250: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.629272ms)
Jan 25 13:22:44.257: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.311795ms)
Jan 25 13:22:44.264: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.434965ms)
Jan 25 13:22:44.269: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.263713ms)
Jan 25 13:22:44.274: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.476431ms)
Jan 25 13:22:44.277: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.522871ms)
Jan 25 13:22:44.282: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.98981ms)
Jan 25 13:22:44.286: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.300111ms)
Jan 25 13:22:44.290: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.41096ms)
Jan 25 13:22:44.295: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.818185ms)
Jan 25 13:22:44.300: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.42973ms)
Jan 25 13:22:44.326: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.996761ms)
Jan 25 13:22:44.332: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.05549ms)
Jan 25 13:22:44.341: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.915366ms)
Jan 25 13:22:44.356: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.405635ms)
Jan 25 13:22:44.364: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.918778ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:22:44.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-84" for this suite.
Jan 25 13:22:50.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:22:50.490: INFO: namespace proxy-84 deletion completed in 6.12279527s

• [SLOW TEST:6.467 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
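[Note] The node-log listing exercised above is ordinary apiserver traffic; a minimal sketch of hitting the same proxy subresource by hand, assuming kubectl points at this cluster (iruya-node is the node name from this run; substitute your own):

# List the node's log directory through the apiserver proxy subresource
kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# Fetch one of the files shown in the listing (alternatives.log appears above)
kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/alternatives.log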
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:22:50.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 13:22:50.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3211'
Jan 25 13:22:50.758: INFO: stderr: ""
Jan 25 13:22:50.758: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 25 13:22:50.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3211'
Jan 25 13:22:54.740: INFO: stderr: ""
Jan 25 13:22:54.740: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:22:54.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3211" for this suite.
Jan 25 13:23:00.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:23:00.978: INFO: namespace kubectl-3211 deletion completed in 6.226060863s

• [SLOW TEST:10.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
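[Note] The invocation above is reproducible by hand with the same v1.15-era client; --restart=Never plus --generator=run-pod/v1 yields a bare Pod with no managing controller (later kubectl releases dropped --generator and create a bare Pod by default):

# Create a standalone pod from an image, with the same flags the test used
kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# Confirm the Pod object exists, then clean up as the AfterEach does
kubectl get pod e2e-test-nginx-pod
kubectl delete pod e2e-test-nginx-pod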
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:23:00.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2685
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 25 13:23:01.223: INFO: Found 0 stateful pods, waiting for 3
Jan 25 13:23:11.236: INFO: Found 2 stateful pods, waiting for 3
Jan 25 13:23:21.234: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:23:21.234: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:23:21.234: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 13:23:31.236: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:23:31.236: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:23:31.236: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 25 13:23:31.286: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 25 13:23:41.352: INFO: Updating stateful set ss2
Jan 25 13:23:41.370: INFO: Waiting for Pod statefulset-2685/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:23:51.386: INFO: Waiting for Pod statefulset-2685/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 25 13:24:01.918: INFO: Found 2 stateful pods, waiting for 3
Jan 25 13:24:11.927: INFO: Found 2 stateful pods, waiting for 3
Jan 25 13:24:21.937: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:24:21.937: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:24:21.937: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 25 13:24:21.987: INFO: Updating stateful set ss2
Jan 25 13:24:22.011: INFO: Waiting for Pod statefulset-2685/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:24:32.026: INFO: Waiting for Pod statefulset-2685/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:24:42.281: INFO: Updating stateful set ss2
Jan 25 13:24:42.610: INFO: Waiting for StatefulSet statefulset-2685/ss2 to complete update
Jan 25 13:24:42.610: INFO: Waiting for Pod statefulset-2685/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:24:52.645: INFO: Waiting for StatefulSet statefulset-2685/ss2 to complete update
Jan 25 13:24:52.645: INFO: Waiting for Pod statefulset-2685/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:25:02.634: INFO: Waiting for StatefulSet statefulset-2685/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 25 13:25:12.638: INFO: Deleting all statefulset in ns statefulset-2685
Jan 25 13:25:12.646: INFO: Scaling statefulset ss2 to 0
Jan 25 13:25:42.677: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:25:42.700: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:25:42.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2685" for this suite.
Jan 25 13:25:48.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:25:48.833: INFO: namespace statefulset-2685 deletion completed in 6.105067691s

• [SLOW TEST:167.855 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
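[Note] The canary and phased behaviour above is driven by spec.updateStrategy.rollingUpdate.partition: pods with an ordinal greater than or equal to the partition stay on the old revision, so a partition above the replica count blocks the rollout entirely and lowering it admits pods one ordinal at a time. A hedged sketch of the sequence the test drives, assuming the StatefulSet's container is named nginx (the container name is not shown in the log):

# Stage a new template while rolling nothing: partition > replicas (3)
kubectl patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Canary: only the highest ordinal (ss2-2) moves to the update revision
kubectl patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phased rollout: drop the partition to 0 to roll the remaining pods
kubectl patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'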
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:25:48.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 13:25:48.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7013'
Jan 25 13:25:49.111: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 13:25:49.111: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 25 13:25:49.124: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 25 13:25:49.140: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 25 13:25:49.184: INFO: scanned /root for discovery docs: 
Jan 25 13:25:49.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7013'
Jan 25 13:26:11.536: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 13:26:11.536: INFO: stdout: "Created e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138\nScaling up e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 25 13:26:11.536: INFO: stdout: "Created e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138\nScaling up e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 25 13:26:11.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7013'
Jan 25 13:26:11.730: INFO: stderr: ""
Jan 25 13:26:11.730: INFO: stdout: "e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138-gld47 e2e-test-nginx-rc-zrw6h "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 25 13:26:16.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7013'
Jan 25 13:26:16.930: INFO: stderr: ""
Jan 25 13:26:16.930: INFO: stdout: "e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138-gld47 "
Jan 25 13:26:16.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138-gld47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7013'
Jan 25 13:26:17.070: INFO: stderr: ""
Jan 25 13:26:17.070: INFO: stdout: "true"
Jan 25 13:26:17.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138-gld47 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7013'
Jan 25 13:26:17.162: INFO: stderr: ""
Jan 25 13:26:17.162: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 25 13:26:17.162: INFO: e2e-test-nginx-rc-1f73f6c0869511b9b30710b840003138-gld47 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 25 13:26:17.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7013'
Jan 25 13:26:17.289: INFO: stderr: ""
Jan 25 13:26:17.289: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:26:17.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7013" for this suite.
Jan 25 13:26:41.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:26:41.442: INFO: namespace kubectl-7013 deletion completed in 24.14848149s

• [SLOW TEST:52.608 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
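[Note] kubectl rolling-update was a purely client-side loop (already deprecated in the v1.15 client used here, removed in later releases): it creates a copy of the RC under a hashed name, scales the copy up while scaling the original down, then renames the copy back, which is exactly the stdout trace above. The same two commands by hand:

# Legacy run/v1 generator: creates a ReplicationController, not a bare Pod
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1
# Client-side rolling update to the same image, one pod per second
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent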
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:26:41.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 25 13:26:41.572: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:26:59.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8282" for this suite.
Jan 25 13:27:21.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:27:21.526: INFO: namespace init-container-8282 deletion completed in 22.134312653s

• [SLOW TEST:40.083 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:27:21.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 25 13:27:21.655: INFO: Waiting up to 5m0s for pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720" in namespace "downward-api-6260" to be "success or failure"
Jan 25 13:27:21.669: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Pending", Reason="", readiness=false. Elapsed: 13.580129ms
Jan 25 13:27:23.686: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030921945s
Jan 25 13:27:25.697: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041444803s
Jan 25 13:27:27.705: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049890153s
Jan 25 13:27:29.715: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059559883s
Jan 25 13:27:31.722: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066895897s
STEP: Saw pod success
Jan 25 13:27:31.722: INFO: Pod "downward-api-f6a07355-0f79-403f-a200-88a3bb558720" satisfied condition "success or failure"
Jan 25 13:27:31.726: INFO: Trying to get logs from node iruya-node pod downward-api-f6a07355-0f79-403f-a200-88a3bb558720 container dapi-container: 
STEP: delete the pod
Jan 25 13:27:31.796: INFO: Waiting for pod downward-api-f6a07355-0f79-403f-a200-88a3bb558720 to disappear
Jan 25 13:27:31.804: INFO: Pod downward-api-f6a07355-0f79-403f-a200-88a3bb558720 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:27:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6260" for this suite.
Jan 25 13:27:37.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:27:38.039: INFO: namespace downward-api-6260 deletion completed in 6.227556974s

• [SLOW TEST:16.512 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
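[Note] The dapi-container above reads its own pod UID from the environment through the downward API. A hedged sketch of the fieldRef wiring this test exercises (pod name and image are illustrative; metadata.uid is the fieldPath the test is about):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # resolved by the kubelet at container start
EOF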
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:27:38.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1439
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 25 13:27:38.155: INFO: Found 0 stateful pods, waiting for 3
Jan 25 13:27:48.165: INFO: Found 2 stateful pods, waiting for 3
Jan 25 13:27:58.249: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:27:58.249: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:27:58.249: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 13:28:08.174: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:28:08.174: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:28:08.174: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:28:08.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1439 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:28:08.800: INFO: stderr: "I0125 13:28:08.415413     901 log.go:172] (0xc0008dc2c0) (0xc00082e6e0) Create stream\nI0125 13:28:08.415818     901 log.go:172] (0xc0008dc2c0) (0xc00082e6e0) Stream added, broadcasting: 1\nI0125 13:28:08.420463     901 log.go:172] (0xc0008dc2c0) Reply frame received for 1\nI0125 13:28:08.420584     901 log.go:172] (0xc0008dc2c0) (0xc0006321e0) Create stream\nI0125 13:28:08.420605     901 log.go:172] (0xc0008dc2c0) (0xc0006321e0) Stream added, broadcasting: 3\nI0125 13:28:08.422069     901 log.go:172] (0xc0008dc2c0) Reply frame received for 3\nI0125 13:28:08.422117     901 log.go:172] (0xc0008dc2c0) (0xc0001fe000) Create stream\nI0125 13:28:08.422130     901 log.go:172] (0xc0008dc2c0) (0xc0001fe000) Stream added, broadcasting: 5\nI0125 13:28:08.423526     901 log.go:172] (0xc0008dc2c0) Reply frame received for 5\nI0125 13:28:08.612971     901 log.go:172] (0xc0008dc2c0) Data frame received for 5\nI0125 13:28:08.613100     901 log.go:172] (0xc0001fe000) (5) Data frame handling\nI0125 13:28:08.613130     901 log.go:172] (0xc0001fe000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:28:08.691733     901 log.go:172] (0xc0008dc2c0) Data frame received for 3\nI0125 13:28:08.691891     901 log.go:172] (0xc0006321e0) (3) Data frame handling\nI0125 13:28:08.691949     901 log.go:172] (0xc0006321e0) (3) Data frame sent\nI0125 13:28:08.791258     901 log.go:172] (0xc0008dc2c0) Data frame received for 1\nI0125 13:28:08.791407     901 log.go:172] (0xc0008dc2c0) (0xc0006321e0) Stream removed, broadcasting: 3\nI0125 13:28:08.791488     901 log.go:172] (0xc00082e6e0) (1) Data frame handling\nI0125 13:28:08.791514     901 log.go:172] (0xc00082e6e0) (1) Data frame sent\nI0125 13:28:08.791539     901 log.go:172] (0xc0008dc2c0) (0xc0001fe000) Stream removed, broadcasting: 5\nI0125 13:28:08.791569     901 log.go:172] (0xc0008dc2c0) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0125 13:28:08.791587     901 log.go:172] (0xc0008dc2c0) Go away received\nI0125 13:28:08.792708     901 log.go:172] (0xc0008dc2c0) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0125 13:28:08.792817     901 log.go:172] (0xc0008dc2c0) (0xc0006321e0) Stream removed, broadcasting: 3\nI0125 13:28:08.792822     901 log.go:172] (0xc0008dc2c0) (0xc0001fe000) Stream removed, broadcasting: 5\n"
Jan 25 13:28:08.800: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:28:08.800: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 25 13:28:18.873: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 25 13:28:28.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1439 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 13:28:29.446: INFO: stderr: "I0125 13:28:29.264345     920 log.go:172] (0xc0006b2b00) (0xc0008aa820) Create stream\nI0125 13:28:29.264696     920 log.go:172] (0xc0006b2b00) (0xc0008aa820) Stream added, broadcasting: 1\nI0125 13:28:29.269889     920 log.go:172] (0xc0006b2b00) Reply frame received for 1\nI0125 13:28:29.270015     920 log.go:172] (0xc0006b2b00) (0xc0008aa8c0) Create stream\nI0125 13:28:29.270028     920 log.go:172] (0xc0006b2b00) (0xc0008aa8c0) Stream added, broadcasting: 3\nI0125 13:28:29.272090     920 log.go:172] (0xc0006b2b00) Reply frame received for 3\nI0125 13:28:29.272169     920 log.go:172] (0xc0006b2b00) (0xc0005de280) Create stream\nI0125 13:28:29.272195     920 log.go:172] (0xc0006b2b00) (0xc0005de280) Stream added, broadcasting: 5\nI0125 13:28:29.273731     920 log.go:172] (0xc0006b2b00) Reply frame received for 5\nI0125 13:28:29.354780     920 log.go:172] (0xc0006b2b00) Data frame received for 3\nI0125 13:28:29.354843     920 log.go:172] (0xc0008aa8c0) (3) Data frame handling\nI0125 13:28:29.354861     920 log.go:172] (0xc0008aa8c0) (3) Data frame sent\nI0125 13:28:29.354907     920 log.go:172] (0xc0006b2b00) Data frame received for 5\nI0125 13:28:29.354916     920 log.go:172] (0xc0005de280) (5) Data frame handling\nI0125 13:28:29.354923     920 log.go:172] (0xc0005de280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 13:28:29.434880     920 log.go:172] (0xc0006b2b00) Data frame received for 1\nI0125 13:28:29.435138     920 log.go:172] (0xc0006b2b00) (0xc0005de280) Stream removed, broadcasting: 5\nI0125 13:28:29.435486     920 log.go:172] (0xc0008aa820) (1) Data frame handling\nI0125 13:28:29.435542     920 log.go:172] (0xc0008aa820) (1) Data frame sent\nI0125 13:28:29.435824     920 log.go:172] (0xc0006b2b00) (0xc0008aa820) Stream removed, broadcasting: 1\nI0125 13:28:29.435893     920 log.go:172] (0xc0006b2b00) (0xc0008aa8c0) Stream removed, broadcasting: 3\nI0125 13:28:29.435942     920 log.go:172] (0xc0006b2b00) Go away received\nI0125 13:28:29.437279     920 log.go:172] (0xc0006b2b00) (0xc0008aa820) Stream removed, broadcasting: 1\nI0125 13:28:29.437296     920 log.go:172] (0xc0006b2b00) (0xc0008aa8c0) Stream removed, broadcasting: 3\nI0125 13:28:29.437305     920 log.go:172] (0xc0006b2b00) (0xc0005de280) Stream removed, broadcasting: 5\n"
Jan 25 13:28:29.447: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 13:28:29.447: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 13:28:39.517: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:28:39.518: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:39.518: INFO: Waiting for Pod statefulset-1439/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:39.518: INFO: Waiting for Pod statefulset-1439/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:49.548: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:28:49.548: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:49.548: INFO: Waiting for Pod statefulset-1439/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:59.535: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:28:59.535: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:28:59.535: INFO: Waiting for Pod statefulset-1439/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:29:09.536: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:29:09.536: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:29:19.570: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:29:19.570: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 25 13:29:29.535: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 25 13:29:39.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1439 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:29:40.101: INFO: stderr: "I0125 13:29:39.753924     942 log.go:172] (0xc00076a420) (0xc000800b40) Create stream\nI0125 13:29:39.754191     942 log.go:172] (0xc00076a420) (0xc000800b40) Stream added, broadcasting: 1\nI0125 13:29:39.759500     942 log.go:172] (0xc00076a420) Reply frame received for 1\nI0125 13:29:39.759585     942 log.go:172] (0xc00076a420) (0xc00074c000) Create stream\nI0125 13:29:39.759621     942 log.go:172] (0xc00076a420) (0xc00074c000) Stream added, broadcasting: 3\nI0125 13:29:39.761344     942 log.go:172] (0xc00076a420) Reply frame received for 3\nI0125 13:29:39.761381     942 log.go:172] (0xc00076a420) (0xc00074c0a0) Create stream\nI0125 13:29:39.761398     942 log.go:172] (0xc00076a420) (0xc00074c0a0) Stream added, broadcasting: 5\nI0125 13:29:39.762835     942 log.go:172] (0xc00076a420) Reply frame received for 5\nI0125 13:29:39.920626     942 log.go:172] (0xc00076a420) Data frame received for 5\nI0125 13:29:39.920708     942 log.go:172] (0xc00074c0a0) (5) Data frame handling\nI0125 13:29:39.920732     942 log.go:172] (0xc00074c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:29:40.005411     942 log.go:172] (0xc00076a420) Data frame received for 3\nI0125 13:29:40.005600     942 log.go:172] (0xc00074c000) (3) Data frame handling\nI0125 13:29:40.005664     942 log.go:172] (0xc00074c000) (3) Data frame sent\nI0125 13:29:40.089565     942 log.go:172] (0xc00076a420) (0xc00074c0a0) Stream removed, broadcasting: 5\nI0125 13:29:40.089727     942 log.go:172] (0xc00076a420) Data frame received for 1\nI0125 13:29:40.089829     942 log.go:172] (0xc00076a420) (0xc00074c000) Stream removed, broadcasting: 3\nI0125 13:29:40.089889     942 log.go:172] (0xc000800b40) (1) Data frame handling\nI0125 13:29:40.089911     942 log.go:172] (0xc000800b40) (1) Data frame sent\nI0125 13:29:40.089919     942 log.go:172] (0xc00076a420) (0xc000800b40) Stream removed, broadcasting: 1\nI0125 13:29:40.089931     942 log.go:172] (0xc00076a420) Go away received\nI0125 13:29:40.092351     942 log.go:172] (0xc00076a420) (0xc000800b40) Stream removed, broadcasting: 1\nI0125 13:29:40.092543     942 log.go:172] (0xc00076a420) (0xc00074c000) Stream removed, broadcasting: 3\nI0125 13:29:40.092563     942 log.go:172] (0xc00076a420) (0xc00074c0a0) Stream removed, broadcasting: 5\n"
Jan 25 13:29:40.101: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:29:40.101: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 13:29:50.158: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 25 13:30:00.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1439 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 13:30:00.650: INFO: stderr: "I0125 13:30:00.426380     961 log.go:172] (0xc00098e370) (0xc0007e66e0) Create stream\nI0125 13:30:00.426624     961 log.go:172] (0xc00098e370) (0xc0007e66e0) Stream added, broadcasting: 1\nI0125 13:30:00.429757     961 log.go:172] (0xc00098e370) Reply frame received for 1\nI0125 13:30:00.429810     961 log.go:172] (0xc00098e370) (0xc00055e280) Create stream\nI0125 13:30:00.429826     961 log.go:172] (0xc00098e370) (0xc00055e280) Stream added, broadcasting: 3\nI0125 13:30:00.431645     961 log.go:172] (0xc00098e370) Reply frame received for 3\nI0125 13:30:00.431674     961 log.go:172] (0xc00098e370) (0xc000820000) Create stream\nI0125 13:30:00.431686     961 log.go:172] (0xc00098e370) (0xc000820000) Stream added, broadcasting: 5\nI0125 13:30:00.433044     961 log.go:172] (0xc00098e370) Reply frame received for 5\nI0125 13:30:00.535770     961 log.go:172] (0xc00098e370) Data frame received for 5\nI0125 13:30:00.536593     961 log.go:172] (0xc000820000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 13:30:00.536814     961 log.go:172] (0xc00098e370) Data frame received for 3\nI0125 13:30:00.536929     961 log.go:172] (0xc000820000) (5) Data frame sent\nI0125 13:30:00.536947     961 log.go:172] (0xc00055e280) (3) Data frame handling\nI0125 13:30:00.536969     961 log.go:172] (0xc00055e280) (3) Data frame sent\nI0125 13:30:00.640866     961 log.go:172] (0xc00098e370) Data frame received for 1\nI0125 13:30:00.640922     961 log.go:172] (0xc0007e66e0) (1) Data frame handling\nI0125 13:30:00.640941     961 log.go:172] (0xc0007e66e0) (1) Data frame sent\nI0125 13:30:00.641575     961 log.go:172] (0xc00098e370) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0125 13:30:00.644021     961 log.go:172] (0xc00098e370) (0xc00055e280) Stream removed, broadcasting: 3\nI0125 13:30:00.644142     961 log.go:172] (0xc00098e370) (0xc000820000) Stream removed, broadcasting: 5\nI0125 13:30:00.644161     961 log.go:172] (0xc00098e370) Go away received\nI0125 13:30:00.644230     961 log.go:172] (0xc00098e370) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0125 13:30:00.644251     961 log.go:172] (0xc00098e370) (0xc00055e280) Stream removed, broadcasting: 3\nI0125 13:30:00.644260     961 log.go:172] (0xc00098e370) (0xc000820000) Stream removed, broadcasting: 5\n"
Jan 25 13:30:00.650: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 13:30:00.650: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 13:30:10.703: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:30:10.703: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 25 13:30:10.703: INFO: Waiting for Pod statefulset-1439/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 25 13:30:20.719: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:30:20.719: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 25 13:30:20.719: INFO: Waiting for Pod statefulset-1439/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 25 13:30:30.727: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:30:30.727: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 25 13:30:40.719: INFO: Waiting for StatefulSet statefulset-1439/ss2 to complete update
Jan 25 13:30:40.719: INFO: Waiting for Pod statefulset-1439/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 25 13:30:50.726: INFO: Deleting all statefulset in ns statefulset-1439
Jan 25 13:30:50.731: INFO: Scaling statefulset ss2 to 0
Jan 25 13:31:30.755: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:31:30.757: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:31:30.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1439" for this suite.
Jan 25 13:31:38.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:31:39.013: INFO: namespace statefulset-1439 deletion completed in 8.233138788s

• [SLOW TEST:240.973 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:31:39.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4645, will wait for the garbage collector to delete the pods
Jan 25 13:31:49.145: INFO: Deleting Job.batch foo took: 10.711899ms
Jan 25 13:31:49.546: INFO: Terminating Job.batch foo pods took: 400.543605ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:32:36.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4645" for this suite.
Jan 25 13:32:42.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:32:42.827: INFO: namespace job-4645 deletion completed in 6.159776391s

• [SLOW TEST:63.813 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:32:42.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-6d24f087-0569-4281-841f-64e7c9856572
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:32:42.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6781" for this suite.
Jan 25 13:32:49.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:32:49.140: INFO: namespace configmap-6781 deletion completed in 6.164734773s

• [SLOW TEST:6.312 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:32:49.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:32:49.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90" in namespace "downward-api-8427" to be "success or failure"
Jan 25 13:32:49.277: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Pending", Reason="", readiness=false. Elapsed: 12.88248ms
Jan 25 13:32:51.296: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031925665s
Jan 25 13:32:53.308: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043524298s
Jan 25 13:32:55.315: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050621268s
Jan 25 13:32:57.322: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057782027s
Jan 25 13:32:59.330: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066117676s
STEP: Saw pod success
Jan 25 13:32:59.330: INFO: Pod "downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90" satisfied condition "success or failure"
Jan 25 13:32:59.335: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90 container client-container: 
STEP: delete the pod
Jan 25 13:32:59.690: INFO: Waiting for pod downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90 to disappear
Jan 25 13:32:59.784: INFO: Pod downwardapi-volume-be154d20-c5c3-4f65-85c9-7443e3b59b90 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:32:59.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8427" for this suite.
Jan 25 13:33:05.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:33:06.013: INFO: namespace downward-api-8427 deletion completed in 6.219704656s

• [SLOW TEST:16.872 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:33:06.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-727012ad-1a21-4a73-870d-beb67a3f861f
STEP: Creating a pod to test consume configMaps
Jan 25 13:33:06.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70" in namespace "configmap-631" to be "success or failure"
Jan 25 13:33:06.190: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Pending", Reason="", readiness=false. Elapsed: 37.560841ms
Jan 25 13:33:08.202: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049867065s
Jan 25 13:33:10.213: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060834597s
Jan 25 13:33:12.227: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0745439s
Jan 25 13:33:14.243: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09104606s
Jan 25 13:33:16.262: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109296847s
STEP: Saw pod success
Jan 25 13:33:16.262: INFO: Pod "pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70" satisfied condition "success or failure"
Jan 25 13:33:16.275: INFO: Trying to get logs from node iruya-node pod pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70 container configmap-volume-test: 
STEP: delete the pod
Jan 25 13:33:16.475: INFO: Waiting for pod pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70 to disappear
Jan 25 13:33:16.482: INFO: Pod pod-configmaps-90b5c044-9d12-4909-8854-3b4c69241d70 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:33:16.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-631" for this suite.
Jan 25 13:33:22.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:33:22.730: INFO: namespace configmap-631 deletion completed in 6.23894415s

• [SLOW TEST:16.717 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:33:22.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:33:22.839: INFO: Creating deployment "test-recreate-deployment"
Jan 25 13:33:22.860: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 25 13:33:22.902: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 25 13:33:24.915: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 25 13:33:24.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556002, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 13:33:26.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556002, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 13:33:28.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556003, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715556002, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 13:33:30.932: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 25 13:33:30.945: INFO: Updating deployment test-recreate-deployment
Jan 25 13:33:30.945: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 25 13:33:31.392: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4280,SelfLink:/apis/apps/v1/namespaces/deployment-4280/deployments/test-recreate-deployment,UID:73d579db-89b0-446e-9ecb-755aabd88992,ResourceVersion:21812188,Generation:2,CreationTimestamp:2020-01-25 13:33:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-25 13:33:31 +0000 UTC 2020-01-25 13:33:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-25 13:33:31 +0000 UTC 2020-01-25 13:33:22 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 13:33:31.397: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4280,SelfLink:/apis/apps/v1/namespaces/deployment-4280/replicasets/test-recreate-deployment-5c8c9cc69d,UID:283fa7fa-a777-4a11-a632-983b86ca1956,ResourceVersion:21812187,Generation:1,CreationTimestamp:2020-01-25 13:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 73d579db-89b0-446e-9ecb-755aabd88992 0xc002c49217 0xc002c49218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 13:33:31.397: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 25 13:33:31.398: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4280,SelfLink:/apis/apps/v1/namespaces/deployment-4280/replicasets/test-recreate-deployment-6df85df6b9,UID:3d2c3699-894a-4208-92f0-4a884b391aeb,ResourceVersion:21812177,Generation:2,CreationTimestamp:2020-01-25 13:33:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 73d579db-89b0-446e-9ecb-755aabd88992 0xc002c492e7 0xc002c492e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 13:33:31.401: INFO: Pod "test-recreate-deployment-5c8c9cc69d-57m2z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-57m2z,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4280,SelfLink:/api/v1/namespaces/deployment-4280/pods/test-recreate-deployment-5c8c9cc69d-57m2z,UID:08bc761d-d431-4607-b0e8-fb95ebc943c5,ResourceVersion:21812189,Generation:0,CreationTimestamp:2020-01-25 13:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 283fa7fa-a777-4a11-a632-983b86ca1956 0xc002c49bd7 0xc002c49bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9xqrx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9xqrx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9xqrx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c49c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c49c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:33:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:33:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:33:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:33:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 13:33:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:33:31.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4280" for this suite.
Jan 25 13:33:37.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:33:37.777: INFO: namespace deployment-4280 deletion completed in 6.34705703s

• [SLOW TEST:15.047 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
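
For reference, the Deployment this spec drives can be reconstructed from the object dump above: revision 1 ran gcr.io/kubernetes-e2e-test-images/redis:1.0, and the rollout triggered at 13:33:30 swapped it for nginx. A sketch of the manifest after that second rollout (anything not in the dump is an assumption):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-recreate-deployment
    labels:
      name: sample-pod-3
  spec:
    replicas: 1
    strategy:
      type: Recreate                 # old pods are torn down before any new pod starts
    selector:
      matchLabels:
        name: sample-pod-3
    template:
      metadata:
        labels:
          name: sample-pod-3
      spec:
        terminationGracePeriodSeconds: 0
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine   # revision 1 used the redis test image

With type: Recreate the controller scales the old ReplicaSet to zero before the new ReplicaSet creates pods, which is exactly the "new pods will not run with old pods" property the spec watches for.
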
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:33:37.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578
Jan 25 13:33:38.023: INFO: Pod name my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578: Found 0 pods out of 1
Jan 25 13:33:43.038: INFO: Pod name my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578: Found 1 pods out of 1
Jan 25 13:33:43.038: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578" are running
Jan 25 13:33:49.051: INFO: Pod "my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578-4qv9x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 13:33:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 13:33:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 13:33:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 13:33:38 +0000 UTC Reason: Message:}])
Jan 25 13:33:49.052: INFO: Trying to dial the pod
Jan 25 13:33:54.112: INFO: Controller my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578: Got expected result from replica 1 [my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578-4qv9x]: "my-hostname-basic-73394468-d831-4c1a-983e-51bc88482578-4qv9x", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:33:54.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5930" for this suite.
Jan 25 13:34:00.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:34:00.367: INFO: namespace replication-controller-5930 deletion completed in 6.235336335s

• [SLOW TEST:22.590 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
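
The spec stands up a one-replica ReplicationController whose pod answers a dial with its own hostname (see the "Got expected result from replica 1" line above). A minimal sketch; the image, tag, and port are assumptions, as the log does not name them:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic          # the real run appends a UUID
  spec:
    replicas: 1
    selector:
      name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed serve-hostname image
          ports:
          - containerPort: 9376
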
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:34:00.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 13:34:00.449: INFO: Waiting up to 5m0s for pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96" in namespace "emptydir-9106" to be "success or failure"
Jan 25 13:34:00.497: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 47.264622ms
Jan 25 13:34:02.511: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061750167s
Jan 25 13:34:04.530: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080532781s
Jan 25 13:34:06.562: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112492135s
Jan 25 13:34:08.590: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140941845s
Jan 25 13:34:10.614: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.164647945s
Jan 25 13:34:12.626: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.176375495s
STEP: Saw pod success
Jan 25 13:34:12.626: INFO: Pod "pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96" satisfied condition "success or failure"
Jan 25 13:34:12.645: INFO: Trying to get logs from node iruya-node pod pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96 container test-container: 
STEP: delete the pod
Jan 25 13:34:12.826: INFO: Waiting for pod pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96 to disappear
Jan 25 13:34:12.835: INFO: Pod pod-6b73b9d0-ebf2-4250-b44f-9e9716de0a96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:34:12.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9106" for this suite.
Jan 25 13:34:18.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:34:19.027: INFO: namespace emptydir-9106 deletion completed in 6.186442398s

• [SLOW TEST:18.659 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
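
The pod under test mounts an emptyDir on the default medium (node disk) and, running as root, asserts the directory mode is 0777. A stand-in sketch; the real test uses its own mount-test image, so the busybox command here is an assumption:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-0777          # the real run uses a generated name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expected output: 777
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                   # default medium
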
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:34:19.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-485416f9-ccd8-4f85-85fa-3b198eaf4df9
STEP: Creating a pod to test consume configMaps
Jan 25 13:34:19.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988" in namespace "projected-1058" to be "success or failure"
Jan 25 13:34:19.233: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Pending", Reason="", readiness=false. Elapsed: 26.536551ms
Jan 25 13:34:21.249: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042509517s
Jan 25 13:34:23.260: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053041356s
Jan 25 13:34:25.275: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06809889s
Jan 25 13:34:27.282: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07540857s
Jan 25 13:34:29.292: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08509144s
STEP: Saw pod success
Jan 25 13:34:29.292: INFO: Pod "pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988" satisfied condition "success or failure"
Jan 25 13:34:29.296: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 13:34:29.818: INFO: Waiting for pod pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988 to disappear
Jan 25 13:34:29.830: INFO: Pod pod-projected-configmaps-ec24603e-09ae-43b2-a2df-3d42da850988 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:34:29.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1058" for this suite.
Jan 25 13:34:35.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:34:35.995: INFO: namespace projected-1058 deletion completed in 6.154152204s

• [SLOW TEST:16.968 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
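
Here the ConfigMap is consumed through a projected volume with a key-to-path mapping, and the pod runs as a non-root UID. A sketch; only the ConfigMap and container names come from the log, while the key, path, and UID are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps
  spec:
    securityContext:
      runAsUser: 1000                # non-root; exact UID assumed
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map   # the run appends a UUID
            items:
            - key: data-1                               # assumed key
              path: path/to/data-2                      # assumed mapped path
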
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:34:35.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:34:36.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd" in namespace "projected-1260" to be "success or failure"
Jan 25 13:34:36.113: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.039299ms
Jan 25 13:34:38.124: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02815291s
Jan 25 13:34:40.135: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039621452s
Jan 25 13:34:42.152: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056714768s
Jan 25 13:34:44.162: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066158836s
Jan 25 13:34:46.172: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075884416s
STEP: Saw pod success
Jan 25 13:34:46.172: INFO: Pod "downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd" satisfied condition "success or failure"
Jan 25 13:34:46.176: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd container client-container: 
STEP: delete the pod
Jan 25 13:34:46.401: INFO: Waiting for pod downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd to disappear
Jan 25 13:34:46.425: INFO: Pod downwardapi-volume-d456c53a-6ff6-4edb-a0bd-ddbde5da33fd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:34:46.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1260" for this suite.
Jan 25 13:34:52.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:34:52.659: INFO: namespace projected-1260 deletion completed in 6.229134867s

• [SLOW TEST:16.664 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
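
The downward API value is exposed as a file in a projected volume whose item points at the container's memory request. A sketch; the container name matches the log, while the file path and resource values are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-pod
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi               # with the default divisor the file prints 33554432 (bytes)
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
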
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:34:52.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 13:35:08.955: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:08.971: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:10.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:10.989: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:12.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:13.422: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:14.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:14.979: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:16.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:16.983: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:18.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:18.979: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:20.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:20.999: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:22.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:23.033: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:24.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:25.000: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 13:35:26.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 13:35:26.984: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:35:26.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4839" for this suite.
Jan 25 13:35:49.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:35:49.189: INFO: namespace container-lifecycle-hook-4839 deletion completed in 22.191520084s

• [SLOW TEST:56.529 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
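
The hooked pod carries a postStart httpGet aimed at the handler pod created in the BeforeEach above, so the hook fires as soon as the container starts. A sketch; the image, host IP, port, and path are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: pod-with-poststart-http-hook
      image: docker.io/library/nginx:1.14-alpine   # assumed image
      lifecycle:
        postStart:
          httpGet:
            host: 10.44.0.1          # IP of the hook-handler pod; illustrative
            port: 8080
            path: /echo?msg=poststart
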
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:35:49.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bdd45b87-740f-402d-a4b1-0fe0daf52bb8
STEP: Creating a pod to test consume secrets
Jan 25 13:35:49.368: INFO: Waiting up to 5m0s for pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a" in namespace "secrets-3487" to be "success or failure"
Jan 25 13:35:49.387: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.900002ms
Jan 25 13:35:51.396: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028216229s
Jan 25 13:35:53.429: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060678775s
Jan 25 13:35:55.510: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142513681s
Jan 25 13:35:57.520: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151729588s
Jan 25 13:35:59.661: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292869554s
STEP: Saw pod success
Jan 25 13:35:59.661: INFO: Pod "pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a" satisfied condition "success or failure"
Jan 25 13:35:59.667: INFO: Trying to get logs from node iruya-node pod pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a container secret-volume-test: 
STEP: delete the pod
Jan 25 13:35:59.906: INFO: Waiting for pod pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a to disappear
Jan 25 13:35:59.951: INFO: Pod pod-secrets-8b58310c-a180-4400-8d26-aca254534e2a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:35:59.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3487" for this suite.
Jan 25 13:36:05.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:36:06.079: INFO: namespace secrets-3487 deletion completed in 6.117532606s

• [SLOW TEST:16.890 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
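
"Multiple volumes" here means the same Secret mounted at two different paths in one pod. A sketch; the Secret name is taken from the log, while the mount paths and data key are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
        readOnly: true
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
        readOnly: true
    volumes:
    - name: secret-volume-1
      secret:
        secretName: secret-test-bdd45b87-740f-402d-a4b1-0fe0daf52bb8
    - name: secret-volume-2
      secret:
        secretName: secret-test-bdd45b87-740f-402d-a4b1-0fe0daf52bb8
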
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:36:06.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-71438bbf-8387-4f5e-8c09-b53e7179979b
STEP: Creating a pod to test consume configMaps
Jan 25 13:36:06.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe" in namespace "configmap-2621" to be "success or failure"
Jan 25 13:36:06.271: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Pending", Reason="", readiness=false. Elapsed: 16.559579ms
Jan 25 13:36:08.298: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043054455s
Jan 25 13:36:10.367: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112299813s
Jan 25 13:36:12.380: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125436885s
Jan 25 13:36:14.391: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136218618s
Jan 25 13:36:16.431: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176680172s
STEP: Saw pod success
Jan 25 13:36:16.431: INFO: Pod "pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe" satisfied condition "success or failure"
Jan 25 13:36:16.436: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe container configmap-volume-test: 
STEP: delete the pod
Jan 25 13:36:16.580: INFO: Waiting for pod pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe to disappear
Jan 25 13:36:16.600: INFO: Pod pod-configmaps-3be69a12-9de7-4ac0-87e9-1230c8b819fe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:36:16.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2621" for this suite.
Jan 25 13:36:22.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:36:22.821: INFO: namespace configmap-2621 deletion completed in 6.213130723s

• [SLOW TEST:16.741 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
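
This is the plain (non-projected) ConfigMap volume path. A sketch; the ConfigMap and container names come from the log, while the key and mount path are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-71438bbf-8387-4f5e-8c09-b53e7179979b
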
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:36:22.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 25 13:36:31.465: INFO: Successfully updated pod "pod-update-75084ba1-f4c4-4736-8f7a-9321e566af80"
STEP: verifying the updated pod is in kubernetes
Jan 25 13:36:31.527: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:36:31.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7693" for this suite.
Jan 25 13:36:53.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:36:53.740: INFO: namespace pods-7693 deletion completed in 22.207025549s

• [SLOW TEST:30.918 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
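
The moving part of this spec is pod metadata: the pod is created, a label is rewritten through the API, and the pod is re-read to confirm the change. A sketch of the starting object; the label key/value and image are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-update-75084ba1-f4c4-4736-8f7a-9321e566af80
    labels:
      time: "123"                    # the update step rewrites this label, then re-GETs the pod
  spec:
    containers:
    - name: nginx
      image: docker.io/library/nginx:1.14-alpine   # assumed image
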
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:36:53.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0125 13:36:57.150762       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 13:36:57.150: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:36:57.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7536" for this suite.
Jan 25 13:37:03.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:37:03.878: INFO: namespace gc-7536 deletion completed in 6.722246991s

• [SLOW TEST:10.137 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
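
What makes the ReplicaSet (and its pods) collectable is the ownerReference each child carries back to the Deployment; deleting the owner without orphaning lets the garbage collector walk those links, which is the countdown visible in the "expected 0 rs, got 1 rs" lines above. The shape of that link on a child object (the name here is illustrative; the UID is borrowed from the Deployment dump earlier in this log):

  # metadata fragment of a ReplicaSet created by a Deployment
  metadata:
    ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: test-deployment                          # illustrative
      uid: 73d579db-89b0-446e-9ecb-755aabd88992      # UID from the dump at 13:33:31
      controller: true
      blockOwnerDeletion: true
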
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:37:03.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:37:04.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab" in namespace "projected-4521" to be "success or failure"
Jan 25 13:37:04.024: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.762064ms
Jan 25 13:37:06.031: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011493182s
Jan 25 13:37:08.060: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040506375s
Jan 25 13:37:10.072: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052934487s
Jan 25 13:37:12.081: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061342785s
Jan 25 13:37:14.089: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069705778s
STEP: Saw pod success
Jan 25 13:37:14.089: INFO: Pod "downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab" satisfied condition "success or failure"
Jan 25 13:37:14.093: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab container client-container: 
STEP: delete the pod
Jan 25 13:37:14.148: INFO: Waiting for pod downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab to disappear
Jan 25 13:37:14.154: INFO: Pod downwardapi-volume-196697f6-2830-4628-b3ab-b83140b773ab no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:37:14.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4521" for this suite.
Jan 25 13:37:20.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:37:20.430: INFO: namespace projected-4521 deletion completed in 6.269437249s

• [SLOW TEST:16.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
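
Same pattern as the memory-request spec earlier in this log, with the projected downwardAPI item pointed at the limit instead. A compact sketch (values assumed):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-limit-pod
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: 64Mi               # the file should print 67108864 (bytes)
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
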
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:37:20.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 25 13:37:20.575: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812797,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 13:37:20.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812798,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 25 13:37:20.576: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812799,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 25 13:37:30.663: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812814,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 13:37:30.663: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812815,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 25 13:37:30.663: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3213,SelfLink:/api/v1/namespaces/watch-3213/configmaps/e2e-watch-test-label-changed,UID:a24fe306-cf20-4693-8df1-28e02d4436d1,ResourceVersion:21812816,Generation:0,CreationTimestamp:2020-01-25 13:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:37:30.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3213" for this suite.
Jan 25 13:37:36.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:37:36.850: INFO: namespace watch-3213 deletion completed in 6.177338549s

• [SLOW TEST:16.420 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
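
The watch is registered with a label selector, so notifications stop while the label is changed and resume once it is restored, which is why the DELETED/ADDED pair brackets the unobserved window. The object being toggled, reconstructed from the dumps above:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: e2e-watch-test-label-changed
    labels:
      watch-this-configmap: label-changed-and-restored   # the selector the watch filters on
  data:
    mutation: "3"                    # incremented on each modification step
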
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:37:36.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9028
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 13:37:36.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 13:38:11.215: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9028 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 13:38:11.215: INFO: >>> kubeConfig: /root/.kube/config
I0125 13:38:11.307893       8 log.go:172] (0xc001206420) (0xc0022aa3c0) Create stream
I0125 13:38:11.307992       8 log.go:172] (0xc001206420) (0xc0022aa3c0) Stream added, broadcasting: 1
I0125 13:38:11.318382       8 log.go:172] (0xc001206420) Reply frame received for 1
I0125 13:38:11.318440       8 log.go:172] (0xc001206420) (0xc00279c000) Create stream
I0125 13:38:11.318452       8 log.go:172] (0xc001206420) (0xc00279c000) Stream added, broadcasting: 3
I0125 13:38:11.320445       8 log.go:172] (0xc001206420) Reply frame received for 3
I0125 13:38:11.320479       8 log.go:172] (0xc001206420) (0xc0022aa460) Create stream
I0125 13:38:11.320490       8 log.go:172] (0xc001206420) (0xc0022aa460) Stream added, broadcasting: 5
I0125 13:38:11.322140       8 log.go:172] (0xc001206420) Reply frame received for 5
I0125 13:38:12.534882       8 log.go:172] (0xc001206420) Data frame received for 3
I0125 13:38:12.535003       8 log.go:172] (0xc00279c000) (3) Data frame handling
I0125 13:38:12.535021       8 log.go:172] (0xc00279c000) (3) Data frame sent
I0125 13:38:12.700038       8 log.go:172] (0xc001206420) (0xc00279c000) Stream removed, broadcasting: 3
I0125 13:38:12.700139       8 log.go:172] (0xc001206420) Data frame received for 1
I0125 13:38:12.700161       8 log.go:172] (0xc0022aa3c0) (1) Data frame handling
I0125 13:38:12.700179       8 log.go:172] (0xc001206420) (0xc0022aa460) Stream removed, broadcasting: 5
I0125 13:38:12.700230       8 log.go:172] (0xc0022aa3c0) (1) Data frame sent
I0125 13:38:12.700257       8 log.go:172] (0xc001206420) (0xc0022aa3c0) Stream removed, broadcasting: 1
I0125 13:38:12.700307       8 log.go:172] (0xc001206420) Go away received
I0125 13:38:12.700414       8 log.go:172] (0xc001206420) (0xc0022aa3c0) Stream removed, broadcasting: 1
I0125 13:38:12.700424       8 log.go:172] (0xc001206420) (0xc00279c000) Stream removed, broadcasting: 3
I0125 13:38:12.700431       8 log.go:172] (0xc001206420) (0xc0022aa460) Stream removed, broadcasting: 5
Jan 25 13:38:12.700: INFO: Found all expected endpoints: [netserver-0]
Jan 25 13:38:12.718: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9028 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 13:38:12.718: INFO: >>> kubeConfig: /root/.kube/config
I0125 13:38:12.765189       8 log.go:172] (0xc0010ea210) (0xc00279c500) Create stream
I0125 13:38:12.765237       8 log.go:172] (0xc0010ea210) (0xc00279c500) Stream added, broadcasting: 1
I0125 13:38:12.770962       8 log.go:172] (0xc0010ea210) Reply frame received for 1
I0125 13:38:12.770986       8 log.go:172] (0xc0010ea210) (0xc0022aa500) Create stream
I0125 13:38:12.770992       8 log.go:172] (0xc0010ea210) (0xc0022aa500) Stream added, broadcasting: 3
I0125 13:38:12.772735       8 log.go:172] (0xc0010ea210) Reply frame received for 3
I0125 13:38:12.772754       8 log.go:172] (0xc0010ea210) (0xc002a3c000) Create stream
I0125 13:38:12.772763       8 log.go:172] (0xc0010ea210) (0xc002a3c000) Stream added, broadcasting: 5
I0125 13:38:12.774095       8 log.go:172] (0xc0010ea210) Reply frame received for 5
I0125 13:38:13.934159       8 log.go:172] (0xc0010ea210) Data frame received for 3
I0125 13:38:13.934336       8 log.go:172] (0xc0022aa500) (3) Data frame handling
I0125 13:38:13.934379       8 log.go:172] (0xc0022aa500) (3) Data frame sent
I0125 13:38:14.123090       8 log.go:172] (0xc0010ea210) (0xc0022aa500) Stream removed, broadcasting: 3
I0125 13:38:14.123287       8 log.go:172] (0xc0010ea210) Data frame received for 1
I0125 13:38:14.123300       8 log.go:172] (0xc00279c500) (1) Data frame handling
I0125 13:38:14.123311       8 log.go:172] (0xc00279c500) (1) Data frame sent
I0125 13:38:14.123779       8 log.go:172] (0xc0010ea210) (0xc00279c500) Stream removed, broadcasting: 1
I0125 13:38:14.124099       8 log.go:172] (0xc0010ea210) (0xc002a3c000) Stream removed, broadcasting: 5
I0125 13:38:14.124161       8 log.go:172] (0xc0010ea210) Go away received
I0125 13:38:14.124388       8 log.go:172] (0xc0010ea210) (0xc00279c500) Stream removed, broadcasting: 1
I0125 13:38:14.124401       8 log.go:172] (0xc0010ea210) (0xc0022aa500) Stream removed, broadcasting: 3
I0125 13:38:14.124420       8 log.go:172] (0xc0010ea210) (0xc002a3c000) Stream removed, broadcasting: 5
Jan 25 13:38:14.124: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:38:14.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9028" for this suite.
Jan 25 13:38:38.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:38:38.327: INFO: namespace pod-network-test-9028 deletion completed in 24.190732419s

• [SLOW TEST:61.476 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
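
The Granular Checks spec execs into a hostNetwork helper pod and fires the nc pipeline quoted above at each netserver pod IP (10.32.0.4 and 10.44.0.1 on port 8081 in this run), expecting the echoed hostname back; the Create stream / Data frame lines in between are the SPDY streams the exec call multiplexes its stdio over. Below is a sketch of the same probe run directly with os/exec rather than through the exec subresource; the IP and port are simply the ones observed above, and any reachable netserver-style UDP endpoint (and a netcat supporting -u/-w) would do.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the suite runs inside its hostexec container: push
        // "hostName" over UDP and keep any non-blank reply within 1s.
        out, err := exec.Command("/bin/sh", "-c",
            `echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'`).Output()
        if err != nil {
            fmt.Println("no UDP answer:", err)
            return
        }
        fmt.Printf("endpoint answered: %q\n", out)
    }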
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:38:38.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ac2aad0a-81fc-43fe-9074-cd7e1b3ee69d in namespace container-probe-7482
Jan 25 13:38:46.480: INFO: Started pod busybox-ac2aad0a-81fc-43fe-9074-cd7e1b3ee69d in namespace container-probe-7482
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 13:38:46.489: INFO: Initial restart count of pod busybox-ac2aad0a-81fc-43fe-9074-cd7e1b3ee69d is 0
Jan 25 13:39:40.899: INFO: Restart count of pod container-probe-7482/busybox-ac2aad0a-81fc-43fe-9074-cd7e1b3ee69d is now 1 (54.41039086s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:39:41.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7482" for this suite.
Jan 25 13:39:47.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:39:47.221: INFO: namespace container-probe-7482 deletion completed in 6.185977032s

• [SLOW TEST:68.893 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
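
The restart counted above (0 to 1 after roughly 54s) is the kubelet reacting to a failing exec probe: the busybox container creates /tmp/health, removes it after a while, and the `cat /tmp/health` probe then starts failing. A pod-spec sketch in the same spirit follows, using the v1.15 API where the probe handler field is still named Handler (renamed ProbeHandler in much later releases); image, timings and namespace are illustrative.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox:1.29",
                    // Healthy for 30s, then the probe file disappears and cat fails.
                    Command: []string{"/bin/sh", "-c",
                        "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }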
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:39:47.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:39:47.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd" in namespace "downward-api-4279" to be "success or failure"
Jan 25 13:39:47.382: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.774243ms
Jan 25 13:39:49.391: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039083361s
Jan 25 13:39:51.402: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050162638s
Jan 25 13:39:53.414: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062171082s
Jan 25 13:39:55.422: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070165973s
Jan 25 13:39:57.430: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078414386s
STEP: Saw pod success
Jan 25 13:39:57.430: INFO: Pod "downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd" satisfied condition "success or failure"
Jan 25 13:39:57.435: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd container client-container: 
STEP: delete the pod
Jan 25 13:39:57.503: INFO: Waiting for pod downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd to disappear
Jan 25 13:39:57.637: INFO: Pod downwardapi-volume-0e5b814c-ca9e-4e3e-85db-06cf809228cd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:39:57.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4279" for this suite.
Jan 25 13:40:03.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:40:03.858: INFO: namespace downward-api-4279 deletion completed in 6.212026106s

• [SLOW TEST:16.637 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
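
The container in this spec deliberately sets no memory limit, so the downward-API file projected for limits.memory falls back to the node's allocatable memory; the pod just cats the file and exits, hence the Succeeded phase above. A sketch of that volume wiring, same client-go vintage as before and with illustrative names:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-memlimit-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
                    // Deliberately no resources.limits.memory: the projected
                    // value then falls back to the node's allocatable memory.
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }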
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:40:03.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 25 13:40:12.600: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a5cffbb3-e687-460e-987c-b1217f186e21"
Jan 25 13:40:12.600: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a5cffbb3-e687-460e-987c-b1217f186e21" in namespace "pods-5052" to be "terminated due to deadline exceeded"
Jan 25 13:40:12.679: INFO: Pod "pod-update-activedeadlineseconds-a5cffbb3-e687-460e-987c-b1217f186e21": Phase="Running", Reason="", readiness=true. Elapsed: 79.058716ms
Jan 25 13:40:14.685: INFO: Pod "pod-update-activedeadlineseconds-a5cffbb3-e687-460e-987c-b1217f186e21": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.084610739s
Jan 25 13:40:14.685: INFO: Pod "pod-update-activedeadlineseconds-a5cffbb3-e687-460e-987c-b1217f186e21" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:40:14.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5052" for this suite.
Jan 25 13:40:20.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:40:21.002: INFO: namespace pods-5052 deletion completed in 6.312479566s

• [SLOW TEST:17.144 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
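
The phase flip above (Running, then Failed with reason DeadlineExceeded about two seconds after the update) is the kubelet enforcing a freshly shortened activeDeadlineSeconds. A sketch of the get-modify-update step follows; pod name, namespace and the 5s value are illustrative, and note the API only allows this field to be added or decreased on a running pod, never increased.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        pods := client.CoreV1().Pods("default")

        pod, err := pods.Get("pod-update-activedeadlineseconds-demo", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Tighten the deadline on the live object; the kubelet then fails the
        // pod with reason DeadlineExceeded once the new deadline has passed.
        deadline := int64(5)
        pod.Spec.ActiveDeadlineSeconds = &deadline
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
        fmt.Println("activeDeadlineSeconds lowered to", deadline)
    }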
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:40:21.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e82264ab-8938-4c08-946f-a16d32f0d3cf
STEP: Creating a pod to test consume secrets
Jan 25 13:40:21.277: INFO: Waiting up to 5m0s for pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56" in namespace "secrets-5711" to be "success or failure"
Jan 25 13:40:21.302: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Pending", Reason="", readiness=false. Elapsed: 24.016536ms
Jan 25 13:40:23.316: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038285451s
Jan 25 13:40:25.325: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047827762s
Jan 25 13:40:27.336: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05811502s
Jan 25 13:40:29.345: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06788913s
Jan 25 13:40:31.361: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083703871s
STEP: Saw pod success
Jan 25 13:40:31.361: INFO: Pod "pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56" satisfied condition "success or failure"
Jan 25 13:40:31.376: INFO: Trying to get logs from node iruya-node pod pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56 container secret-volume-test: 
STEP: delete the pod
Jan 25 13:40:31.431: INFO: Waiting for pod pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56 to disappear
Jan 25 13:40:31.441: INFO: Pod pod-secrets-99e8767e-9c4a-468d-9e0f-26da0a749d56 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:40:31.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5711" for this suite.
Jan 25 13:40:37.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:40:37.922: INFO: namespace secrets-5711 deletion completed in 6.3976612s
STEP: Destroying namespace "secret-namespace-1326" for this suite.
Jan 25 13:40:43.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:40:44.040: INFO: namespace secret-namespace-1326 deletion completed in 6.117887669s

• [SLOW TEST:23.038 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
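
Two namespaces are torn down above (secrets-5711 and secret-namespace-1326) because the spec plants a secret with the same name in each and then checks that the pod mounts the copy from its own namespace: secret references in a pod spec are namespace-local by construction. A sketch of that setup; the demo-a/demo-b namespaces are assumed to already exist and every name is illustrative.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // The same secret name in two namespaces; a pod only ever sees the
        // copy from the namespace it runs in.
        for _, ns := range []string{"demo-a", "demo-b"} {
            secret := &corev1.Secret{
                ObjectMeta: metav1.ObjectMeta{Name: "shared-name"},
                Data:       map[string][]byte{"data-1": []byte("value-in-" + ns)},
            }
            if _, err := client.CoreV1().Secrets(ns).Create(secret); err != nil {
                panic(err)
            }
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "busybox:1.29",
                    Command:      []string{"/bin/sh", "-c", "cat /etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "shared-name"},
                    },
                }},
            },
        }
        // Prints "value-in-demo-a": the secret of the pod's own namespace wins.
        if _, err := client.CoreV1().Pods("demo-a").Create(pod); err != nil {
            panic(err)
        }
    }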
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:40:44.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 13:40:44.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2210'
Jan 25 13:40:46.201: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 13:40:46.201: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 25 13:40:46.314: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5zwqx]
Jan 25 13:40:46.314: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5zwqx" in namespace "kubectl-2210" to be "running and ready"
Jan 25 13:40:46.329: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.782699ms
Jan 25 13:40:48.375: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060898196s
Jan 25 13:40:50.384: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070085024s
Jan 25 13:40:52.390: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076158196s
Jan 25 13:40:54.400: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086334754s
Jan 25 13:40:56.418: INFO: Pod "e2e-test-nginx-rc-5zwqx": Phase="Running", Reason="", readiness=true. Elapsed: 10.104051332s
Jan 25 13:40:56.418: INFO: Pod "e2e-test-nginx-rc-5zwqx" satisfied condition "running and ready"
Jan 25 13:40:56.418: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-5zwqx]
Jan 25 13:40:56.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2210'
Jan 25 13:40:56.662: INFO: stderr: ""
Jan 25 13:40:56.663: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 25 13:40:56.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2210'
Jan 25 13:40:56.772: INFO: stderr: ""
Jan 25 13:40:56.772: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:40:56.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2210" for this suite.
Jan 25 13:41:18.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:41:18.987: INFO: namespace kubectl-2210 deletion completed in 22.17356313s

• [SLOW TEST:34.947 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
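
`kubectl run --generator=run/v1` (already deprecated at this point, as the stderr above says) expands the image into a ReplicationController; the empty stdout from `kubectl logs rc/e2e-test-nginx-rc` is consistent with an nginx that has served no requests yet. Below is a sketch of roughly the object that command creates, built directly with client-go; field values mirror the log, the rest is illustrative.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        replicas := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-rc"}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-rc",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        if _, err := client.CoreV1().ReplicationControllers("default").Create(rc); err != nil {
            panic(err)
        }
    }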
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:41:18.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 25 13:41:19.213: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:41:36.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8460" for this suite.
Jan 25 13:41:42.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:41:42.840: INFO: namespace pods-8460 deletion completed in 6.262360994s

• [SLOW TEST:23.853 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
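
This spec sets up its watch before submitting the pod, so no event can be missed, and then asserts the full lifecycle: ADDED on creation, MODIFIED while the kubelet handles the graceful termination notice, and finally DELETED. A sketch of observing that sequence; the label selector and namespace are illustrative.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Start watching before submitting the pod under test.
        w, err := client.CoreV1().Pods("default").Watch(metav1.ListOptions{
            LabelSelector: "time=demo", // label carried by the test pod (illustrative)
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            fmt.Println("observed:", ev.Type)
            if ev.Type == watch.Deleted {
                return // creation, termination and deletion all observed
            }
        }
    }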
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:41:42.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 13:41:43.108: INFO: Number of nodes with available pods: 0
Jan 25 13:41:43.108: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:44.122: INFO: Number of nodes with available pods: 0
Jan 25 13:41:44.122: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:45.247: INFO: Number of nodes with available pods: 0
Jan 25 13:41:45.247: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:46.125: INFO: Number of nodes with available pods: 0
Jan 25 13:41:46.125: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:47.131: INFO: Number of nodes with available pods: 0
Jan 25 13:41:47.131: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:48.144: INFO: Number of nodes with available pods: 0
Jan 25 13:41:48.144: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:49.405: INFO: Number of nodes with available pods: 0
Jan 25 13:41:49.405: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:50.232: INFO: Number of nodes with available pods: 0
Jan 25 13:41:50.232: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:51.131: INFO: Number of nodes with available pods: 0
Jan 25 13:41:51.131: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:52.134: INFO: Number of nodes with available pods: 0
Jan 25 13:41:52.134: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:53.125: INFO: Number of nodes with available pods: 0
Jan 25 13:41:53.125: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:41:54.206: INFO: Number of nodes with available pods: 2
Jan 25 13:41:54.206: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 25 13:41:54.288: INFO: Number of nodes with available pods: 1
Jan 25 13:41:54.288: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:41:55.308: INFO: Number of nodes with available pods: 1
Jan 25 13:41:55.308: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:41:56.308: INFO: Number of nodes with available pods: 1
Jan 25 13:41:56.308: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:41:57.302: INFO: Number of nodes with available pods: 1
Jan 25 13:41:57.302: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:41:58.578: INFO: Number of nodes with available pods: 1
Jan 25 13:41:58.578: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:41:59.304: INFO: Number of nodes with available pods: 1
Jan 25 13:41:59.304: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:00.304: INFO: Number of nodes with available pods: 1
Jan 25 13:42:00.304: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:01.303: INFO: Number of nodes with available pods: 1
Jan 25 13:42:01.304: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:02.298: INFO: Number of nodes with available pods: 1
Jan 25 13:42:02.298: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:03.310: INFO: Number of nodes with available pods: 1
Jan 25 13:42:03.310: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:04.309: INFO: Number of nodes with available pods: 1
Jan 25 13:42:04.309: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:05.308: INFO: Number of nodes with available pods: 1
Jan 25 13:42:05.308: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:06.311: INFO: Number of nodes with available pods: 1
Jan 25 13:42:06.311: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:07.306: INFO: Number of nodes with available pods: 1
Jan 25 13:42:07.306: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:08.299: INFO: Number of nodes with available pods: 1
Jan 25 13:42:08.300: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:09.339: INFO: Number of nodes with available pods: 1
Jan 25 13:42:09.339: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:10.303: INFO: Number of nodes with available pods: 1
Jan 25 13:42:10.303: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:11.305: INFO: Number of nodes with available pods: 1
Jan 25 13:42:11.305: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:12.643: INFO: Number of nodes with available pods: 1
Jan 25 13:42:12.643: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:13.398: INFO: Number of nodes with available pods: 1
Jan 25 13:42:13.398: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:14.304: INFO: Number of nodes with available pods: 1
Jan 25 13:42:14.304: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:42:15.300: INFO: Number of nodes with available pods: 2
Jan 25 13:42:15.300: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9155, will wait for the garbage collector to delete the pods
Jan 25 13:42:15.369: INFO: Deleting DaemonSet.extensions daemon-set took: 14.454401ms
Jan 25 13:42:15.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.480256ms
Jan 25 13:42:27.978: INFO: Number of nodes with available pods: 0
Jan 25 13:42:27.978: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 13:42:27.984: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9155/daemonsets","resourceVersion":"21813514"},"items":null}

Jan 25 13:42:28.002: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9155/pods","resourceVersion":"21813515"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:42:28.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9155" for this suite.
Jan 25 13:42:34.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:42:34.233: INFO: namespace daemonsets-9155 deletion completed in 6.210577232s

• [SLOW TEST:51.392 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
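
One note on reading the poll output above: in this vintage of the suite, the DaemonSet check prints "Node ... is running more than one daemon pod" whenever a node's daemon-pod count differs from exactly one, including zero, so those lines really just mean "not ready on this node yet". The spec itself creates a simple DaemonSet, waits for one available pod per node, kills one pod ("Stop a daemon pod") and waits for the controller to revive it. A sketch of the object involved, with illustrative image and labels:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        // One pod per schedulable node; deleting any of them makes the
        // controller recreate it on the same node.
        if _, err := client.AppsV1().DaemonSets("default").Create(ds); err != nil {
            panic(err)
        }
    }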
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:42:34.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:42:34.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2" in namespace "downward-api-6667" to be "success or failure"
Jan 25 13:42:34.409: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.036641ms
Jan 25 13:42:36.420: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020956518s
Jan 25 13:42:38.428: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02876199s
Jan 25 13:42:40.439: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040243453s
Jan 25 13:42:42.451: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051567082s
STEP: Saw pod success
Jan 25 13:42:42.451: INFO: Pod "downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2" satisfied condition "success or failure"
Jan 25 13:42:42.455: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2 container client-container: 
STEP: delete the pod
Jan 25 13:42:42.559: INFO: Waiting for pod downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2 to disappear
Jan 25 13:42:42.566: INFO: Pod downwardapi-volume-3bb8b8c7-32f4-40df-8381-43debb12eaf2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:42:42.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6667" for this suite.
Jan 25 13:42:48.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:42:48.715: INFO: namespace downward-api-6667 deletion completed in 6.142584721s

• [SLOW TEST:14.481 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
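
This is the CPU twin of the earlier downward-API spec: here the container sets an explicit CPU limit, and the projected file reports that limit rather than the node's allocatable. A sketch follows (illustrative names; note that with the field's default divisor of 1 the value is rounded up to whole cores, so a 500m limit reads back as 1):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-cpulimit-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        // With the limit set, the projected file holds it
                        // instead of falling back to node allocatable.
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }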
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:42:48.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 25 13:42:48.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1865'
Jan 25 13:42:49.417: INFO: stderr: ""
Jan 25 13:42:49.418: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 13:42:49.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1865'
Jan 25 13:42:49.644: INFO: stderr: ""
Jan 25 13:42:49.644: INFO: stdout: "update-demo-nautilus-cdp7h update-demo-nautilus-hhzcr "
Jan 25 13:42:49.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cdp7h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:42:49.836: INFO: stderr: ""
Jan 25 13:42:49.836: INFO: stdout: ""
Jan 25 13:42:49.836: INFO: update-demo-nautilus-cdp7h is created but not running
Jan 25 13:42:54.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1865'
Jan 25 13:42:54.980: INFO: stderr: ""
Jan 25 13:42:54.980: INFO: stdout: "update-demo-nautilus-cdp7h update-demo-nautilus-hhzcr "
Jan 25 13:42:54.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cdp7h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:42:56.785: INFO: stderr: ""
Jan 25 13:42:56.785: INFO: stdout: ""
Jan 25 13:42:56.785: INFO: update-demo-nautilus-cdp7h is created but not running
Jan 25 13:43:01.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1865'
Jan 25 13:43:01.949: INFO: stderr: ""
Jan 25 13:43:01.949: INFO: stdout: "update-demo-nautilus-cdp7h update-demo-nautilus-hhzcr "
Jan 25 13:43:01.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cdp7h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:02.085: INFO: stderr: ""
Jan 25 13:43:02.085: INFO: stdout: "true"
Jan 25 13:43:02.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cdp7h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:02.322: INFO: stderr: ""
Jan 25 13:43:02.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 13:43:02.323: INFO: validating pod update-demo-nautilus-cdp7h
Jan 25 13:43:02.328: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 13:43:02.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 13:43:02.328: INFO: update-demo-nautilus-cdp7h is verified up and running
Jan 25 13:43:02.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhzcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:02.440: INFO: stderr: ""
Jan 25 13:43:02.440: INFO: stdout: "true"
Jan 25 13:43:02.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhzcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:02.635: INFO: stderr: ""
Jan 25 13:43:02.635: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 13:43:02.635: INFO: validating pod update-demo-nautilus-hhzcr
Jan 25 13:43:02.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 13:43:02.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 13:43:02.668: INFO: update-demo-nautilus-hhzcr is verified up and running
STEP: rolling-update to new replication controller
Jan 25 13:43:02.673: INFO: scanned /root for discovery docs: 
Jan 25 13:43:02.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1865'
Jan 25 13:43:34.987: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 13:43:34.987: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 13:43:34.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1865'
Jan 25 13:43:35.189: INFO: stderr: ""
Jan 25 13:43:35.189: INFO: stdout: "update-demo-kitten-f55vw update-demo-kitten-rslwr "
Jan 25 13:43:35.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f55vw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:35.345: INFO: stderr: ""
Jan 25 13:43:35.345: INFO: stdout: "true"
Jan 25 13:43:35.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f55vw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:35.537: INFO: stderr: ""
Jan 25 13:43:35.537: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 13:43:35.537: INFO: validating pod update-demo-kitten-f55vw
Jan 25 13:43:35.561: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 13:43:35.561: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 25 13:43:35.561: INFO: update-demo-kitten-f55vw is verified up and running
Jan 25 13:43:35.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rslwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:35.654: INFO: stderr: ""
Jan 25 13:43:35.654: INFO: stdout: "true"
Jan 25 13:43:35.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rslwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1865'
Jan 25 13:43:35.799: INFO: stderr: ""
Jan 25 13:43:35.799: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 13:43:35.799: INFO: validating pod update-demo-kitten-rslwr
Jan 25 13:43:35.831: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 13:43:35.832: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 25 13:43:35.832: INFO: update-demo-kitten-rslwr is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:43:35.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1865" for this suite.
Jan 25 13:44:01.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:44:02.041: INFO: namespace kubectl-1865 deletion completed in 26.20457619s

• [SLOW TEST:73.326 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
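
kubectl rolling-update, deprecated in favour of rollouts as the stderr notes, performs the whole migration client-side: create the kitten RC, alternately scale it up and the nautilus RC down one pod at a time, then delete the old controller and rename the new one, exactly as the stdout above narrates. Each step is just a ReplicationController scale; a sketch of one such step, with an illustrative namespace:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // scaleRC performs one step of a client-side rolling update; the
    // deprecated command alternates steps like this between the two RCs.
    func scaleRC(client *kubernetes.Clientset, ns, name string, replicas int32) error {
        rc, err := client.CoreV1().ReplicationControllers(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        rc.Spec.Replicas = &replicas
        _, err = client.CoreV1().ReplicationControllers(ns).Update(rc)
        return err
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // "Scaling update-demo-kitten up to 1" / "Scaling update-demo-nautilus down to 1"
        if err := scaleRC(client, "default", "update-demo-kitten", 1); err != nil {
            panic(err)
        }
        if err := scaleRC(client, "default", "update-demo-nautilus", 1); err != nil {
            panic(err)
        }
    }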
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:44:02.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 25 13:44:12.691: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3545 pod-service-account-defff369-e8c7-4eb4-9cde-3abc2ddefcf6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 25 13:44:13.243: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3545 pod-service-account-defff369-e8c7-4eb4-9cde-3abc2ddefcf6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 25 13:44:13.828: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3545 pod-service-account-defff369-e8c7-4eb4-9cde-3abc2ddefcf6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:44:14.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3545" for this suite.
Jan 25 13:44:20.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:44:20.546: INFO: namespace svcaccounts-3545 deletion completed in 6.15426586s

• [SLOW TEST:18.505 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
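
The three exec'd cat commands above read the token, CA bundle and namespace that the ServiceAccount admission controller mounts into every pod under /var/run/secrets/kubernetes.io/serviceaccount. client-go's in-cluster config is built from those same files, so a compact way to consume the mount from inside a pod looks like this (a sketch, not the suite's code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Inside a pod, InClusterConfig reads the mounted service-account
        // files (.../serviceaccount/token and .../serviceaccount/ca.crt) and
        // the API server address from the injected environment variables.
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err) // not running inside a cluster
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Any call now authenticates with the mounted token as a bearer token.
        v, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("authenticated as the pod's service account; server", v.GitVersion)
    }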
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:44:20.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-536
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-536
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-536
Jan 25 13:44:20.720: INFO: Found 0 stateful pods, waiting for 1
Jan 25 13:44:30.738: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 13:44:40.728: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 25 13:44:40.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:44:41.557: INFO: stderr: "I0125 13:44:41.016770    1404 log.go:172] (0xc0009480b0) (0xc0008dc6e0) Create stream\nI0125 13:44:41.017222    1404 log.go:172] (0xc0009480b0) (0xc0008dc6e0) Stream added, broadcasting: 1\nI0125 13:44:41.054921    1404 log.go:172] (0xc0009480b0) Reply frame received for 1\nI0125 13:44:41.055113    1404 log.go:172] (0xc0009480b0) (0xc0003b0320) Create stream\nI0125 13:44:41.055147    1404 log.go:172] (0xc0009480b0) (0xc0003b0320) Stream added, broadcasting: 3\nI0125 13:44:41.059119    1404 log.go:172] (0xc0009480b0) Reply frame received for 3\nI0125 13:44:41.059366    1404 log.go:172] (0xc0009480b0) (0xc00027e000) Create stream\nI0125 13:44:41.059407    1404 log.go:172] (0xc0009480b0) (0xc00027e000) Stream added, broadcasting: 5\nI0125 13:44:41.062004    1404 log.go:172] (0xc0009480b0) Reply frame received for 5\nI0125 13:44:41.256593    1404 log.go:172] (0xc0009480b0) Data frame received for 5\nI0125 13:44:41.256973    1404 log.go:172] (0xc00027e000) (5) Data frame handling\nI0125 13:44:41.257039    1404 log.go:172] (0xc00027e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:44:41.380406    1404 log.go:172] (0xc0009480b0) Data frame received for 3\nI0125 13:44:41.380511    1404 log.go:172] (0xc0003b0320) (3) Data frame handling\nI0125 13:44:41.380543    1404 log.go:172] (0xc0003b0320) (3) Data frame sent\nI0125 13:44:41.542734    1404 log.go:172] (0xc0009480b0) (0xc0003b0320) Stream removed, broadcasting: 3\nI0125 13:44:41.542990    1404 log.go:172] (0xc0009480b0) (0xc00027e000) Stream removed, broadcasting: 5\nI0125 13:44:41.543064    1404 log.go:172] (0xc0009480b0) Data frame received for 1\nI0125 13:44:41.543082    1404 log.go:172] (0xc0008dc6e0) (1) Data frame handling\nI0125 13:44:41.543140    1404 log.go:172] (0xc0008dc6e0) (1) Data frame sent\nI0125 13:44:41.543176    1404 log.go:172] (0xc0009480b0) (0xc0008dc6e0) Stream removed, broadcasting: 1\nI0125 13:44:41.544350    1404 log.go:172] (0xc0009480b0) Go away received\nI0125 13:44:41.545005    1404 log.go:172] (0xc0009480b0) (0xc0008dc6e0) Stream removed, broadcasting: 1\nI0125 13:44:41.545036    1404 log.go:172] (0xc0009480b0) (0xc0003b0320) Stream removed, broadcasting: 3\nI0125 13:44:41.545053    1404 log.go:172] (0xc0009480b0) (0xc00027e000) Stream removed, broadcasting: 5\n"
Jan 25 13:44:41.557: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:44:41.557: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 13:44:41.566: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 13:44:51.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 13:44:51.577: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:44:51.610: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 13:44:51.610: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:44:51.610: INFO: 
Jan 25 13:44:51.610: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 25 13:44:53.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984126197s
Jan 25 13:44:54.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.36246371s
Jan 25 13:44:55.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.352656426s
Jan 25 13:44:56.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.335990543s
Jan 25 13:44:57.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.298318942s
Jan 25 13:44:58.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.856491191s
Jan 25 13:44:59.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.842028368s
Jan 25 13:45:00.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 820.963191ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-536
Jan 25 13:45:01.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 13:45:03.060: INFO: stderr: "I0125 13:45:02.248852    1424 log.go:172] (0xc0008c0b00) (0xc0008a8c80) Create stream\nI0125 13:45:02.249509    1424 log.go:172] (0xc0008c0b00) (0xc0008a8c80) Stream added, broadcasting: 1\nI0125 13:45:02.290898    1424 log.go:172] (0xc0008c0b00) Reply frame received for 1\nI0125 13:45:02.291181    1424 log.go:172] (0xc0008c0b00) (0xc0008a8000) Create stream\nI0125 13:45:02.291215    1424 log.go:172] (0xc0008c0b00) (0xc0008a8000) Stream added, broadcasting: 3\nI0125 13:45:02.296137    1424 log.go:172] (0xc0008c0b00) Reply frame received for 3\nI0125 13:45:02.296185    1424 log.go:172] (0xc0008c0b00) (0xc0008a80a0) Create stream\nI0125 13:45:02.296201    1424 log.go:172] (0xc0008c0b00) (0xc0008a80a0) Stream added, broadcasting: 5\nI0125 13:45:02.298386    1424 log.go:172] (0xc0008c0b00) Reply frame received for 5\nI0125 13:45:02.778508    1424 log.go:172] (0xc0008c0b00) Data frame received for 5\nI0125 13:45:02.779252    1424 log.go:172] (0xc0008a80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 13:45:02.779557    1424 log.go:172] (0xc0008c0b00) Data frame received for 3\nI0125 13:45:02.780053    1424 log.go:172] (0xc0008a80a0) (5) Data frame sent\nI0125 13:45:02.780200    1424 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0125 13:45:02.780277    1424 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0125 13:45:03.051045    1424 log.go:172] (0xc0008c0b00) (0xc0008a80a0) Stream removed, broadcasting: 5\nI0125 13:45:03.051195    1424 log.go:172] (0xc0008c0b00) (0xc0008a8000) Stream removed, broadcasting: 3\nI0125 13:45:03.051235    1424 log.go:172] (0xc0008c0b00) Data frame received for 1\nI0125 13:45:03.051257    1424 log.go:172] (0xc0008a8c80) (1) Data frame handling\nI0125 13:45:03.051288    1424 log.go:172] (0xc0008a8c80) (1) Data frame sent\nI0125 13:45:03.051301    1424 log.go:172] (0xc0008c0b00) (0xc0008a8c80) Stream removed, broadcasting: 1\nI0125 13:45:03.051316    1424 log.go:172] (0xc0008c0b00) Go away received\nI0125 13:45:03.052885    1424 log.go:172] (0xc0008c0b00) (0xc0008a8c80) Stream removed, broadcasting: 1\nI0125 13:45:03.052949    1424 log.go:172] (0xc0008c0b00) (0xc0008a8000) Stream removed, broadcasting: 3\nI0125 13:45:03.052971    1424 log.go:172] (0xc0008c0b00) (0xc0008a80a0) Stream removed, broadcasting: 5\n"
Jan 25 13:45:03.061: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 13:45:03.061: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 13:45:03.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 13:45:03.411: INFO: stderr: "I0125 13:45:03.246198    1443 log.go:172] (0xc00094e0b0) (0xc000ae01e0) Create stream\nI0125 13:45:03.246383    1443 log.go:172] (0xc00094e0b0) (0xc000ae01e0) Stream added, broadcasting: 1\nI0125 13:45:03.249726    1443 log.go:172] (0xc00094e0b0) Reply frame received for 1\nI0125 13:45:03.249805    1443 log.go:172] (0xc00094e0b0) (0xc00059c280) Create stream\nI0125 13:45:03.249815    1443 log.go:172] (0xc00094e0b0) (0xc00059c280) Stream added, broadcasting: 3\nI0125 13:45:03.250700    1443 log.go:172] (0xc00094e0b0) Reply frame received for 3\nI0125 13:45:03.250741    1443 log.go:172] (0xc00094e0b0) (0xc0003ce000) Create stream\nI0125 13:45:03.250751    1443 log.go:172] (0xc00094e0b0) (0xc0003ce000) Stream added, broadcasting: 5\nI0125 13:45:03.251475    1443 log.go:172] (0xc00094e0b0) Reply frame received for 5\nI0125 13:45:03.344579    1443 log.go:172] (0xc00094e0b0) Data frame received for 5\nI0125 13:45:03.344670    1443 log.go:172] (0xc0003ce000) (5) Data frame handling\nI0125 13:45:03.344689    1443 log.go:172] (0xc0003ce000) (5) Data frame sent\nI0125 13:45:03.344700    1443 log.go:172] (0xc00094e0b0) Data frame received for 3\nI0125 13:45:03.344708    1443 log.go:172] (0xc00059c280) (3) Data frame handling\nI0125 13:45:03.344715    1443 log.go:172] (0xc00059c280) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 13:45:03.402823    1443 log.go:172] (0xc00094e0b0) Data frame received for 1\nI0125 13:45:03.402951    1443 log.go:172] (0xc00094e0b0) (0xc0003ce000) Stream removed, broadcasting: 5\nI0125 13:45:03.403017    1443 log.go:172] (0xc000ae01e0) (1) Data frame handling\nI0125 13:45:03.403038    1443 log.go:172] (0xc000ae01e0) (1) Data frame sent\nI0125 13:45:03.403163    1443 log.go:172] (0xc00094e0b0) (0xc00059c280) Stream removed, broadcasting: 3\nI0125 13:45:03.403187    1443 log.go:172] (0xc00094e0b0) (0xc000ae01e0) Stream removed, broadcasting: 1\nI0125 13:45:03.403201    1443 log.go:172] (0xc00094e0b0) Go away received\nI0125 13:45:03.404405    1443 log.go:172] (0xc00094e0b0) (0xc000ae01e0) Stream removed, broadcasting: 1\nI0125 13:45:03.404438    1443 log.go:172] (0xc00094e0b0) (0xc00059c280) Stream removed, broadcasting: 3\nI0125 13:45:03.404453    1443 log.go:172] (0xc00094e0b0) (0xc0003ce000) Stream removed, broadcasting: 5\n"
Jan 25 13:45:03.412: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 13:45:03.412: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 13:45:03.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 13:45:03.857: INFO: stderr: "I0125 13:45:03.537129    1466 log.go:172] (0xc000116e70) (0xc00063e6e0) Create stream\nI0125 13:45:03.537395    1466 log.go:172] (0xc000116e70) (0xc00063e6e0) Stream added, broadcasting: 1\nI0125 13:45:03.541931    1466 log.go:172] (0xc000116e70) Reply frame received for 1\nI0125 13:45:03.542003    1466 log.go:172] (0xc000116e70) (0xc000776000) Create stream\nI0125 13:45:03.542013    1466 log.go:172] (0xc000116e70) (0xc000776000) Stream added, broadcasting: 3\nI0125 13:45:03.543557    1466 log.go:172] (0xc000116e70) Reply frame received for 3\nI0125 13:45:03.543596    1466 log.go:172] (0xc000116e70) (0xc00063e780) Create stream\nI0125 13:45:03.543606    1466 log.go:172] (0xc000116e70) (0xc00063e780) Stream added, broadcasting: 5\nI0125 13:45:03.545765    1466 log.go:172] (0xc000116e70) Reply frame received for 5\nI0125 13:45:03.640478    1466 log.go:172] (0xc000116e70) Data frame received for 3\nI0125 13:45:03.640538    1466 log.go:172] (0xc000776000) (3) Data frame handling\nI0125 13:45:03.640567    1466 log.go:172] (0xc000776000) (3) Data frame sent\nI0125 13:45:03.640697    1466 log.go:172] (0xc000116e70) Data frame received for 5\nI0125 13:45:03.640710    1466 log.go:172] (0xc00063e780) (5) Data frame handling\nI0125 13:45:03.640720    1466 log.go:172] (0xc00063e780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 13:45:03.835709    1466 log.go:172] (0xc000116e70) Data frame received for 1\nI0125 13:45:03.835924    1466 log.go:172] (0xc00063e6e0) (1) Data frame handling\nI0125 13:45:03.835956    1466 log.go:172] (0xc00063e6e0) (1) Data frame sent\nI0125 13:45:03.836051    1466 log.go:172] (0xc000116e70) (0xc00063e6e0) Stream removed, broadcasting: 1\nI0125 13:45:03.836097    1466 log.go:172] (0xc000116e70) (0xc000776000) Stream removed, broadcasting: 3\nI0125 13:45:03.838764    1466 log.go:172] (0xc000116e70) (0xc00063e780) Stream removed, broadcasting: 5\nI0125 13:45:03.838932    1466 log.go:172] (0xc000116e70) Go away received\nI0125 13:45:03.839362    1466 log.go:172] (0xc000116e70) (0xc00063e6e0) Stream removed, broadcasting: 1\nI0125 13:45:03.839394    1466 log.go:172] (0xc000116e70) (0xc000776000) Stream removed, broadcasting: 3\nI0125 13:45:03.839420    1466 log.go:172] (0xc000116e70) (0xc00063e780) Stream removed, broadcasting: 5\n"
Jan 25 13:45:03.857: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 13:45:03.857: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 13:45:03.875: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:45:03.875: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:45:03.875: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 13:45:13.891: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:45:13.891: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 13:45:13.891: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 25 13:45:13.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:45:14.765: INFO: stderr: "I0125 13:45:14.263595    1484 log.go:172] (0xc00096e420) (0xc00091c640) Create stream\nI0125 13:45:14.264152    1484 log.go:172] (0xc00096e420) (0xc00091c640) Stream added, broadcasting: 1\nI0125 13:45:14.317353    1484 log.go:172] (0xc00096e420) Reply frame received for 1\nI0125 13:45:14.317936    1484 log.go:172] (0xc00096e420) (0xc00091c6e0) Create stream\nI0125 13:45:14.317997    1484 log.go:172] (0xc00096e420) (0xc00091c6e0) Stream added, broadcasting: 3\nI0125 13:45:14.327182    1484 log.go:172] (0xc00096e420) Reply frame received for 3\nI0125 13:45:14.327292    1484 log.go:172] (0xc00096e420) (0xc0009ce000) Create stream\nI0125 13:45:14.327313    1484 log.go:172] (0xc00096e420) (0xc0009ce000) Stream added, broadcasting: 5\nI0125 13:45:14.329908    1484 log.go:172] (0xc00096e420) Reply frame received for 5\nI0125 13:45:14.634973    1484 log.go:172] (0xc00096e420) Data frame received for 3\nI0125 13:45:14.635189    1484 log.go:172] (0xc00091c6e0) (3) Data frame handling\nI0125 13:45:14.635223    1484 log.go:172] (0xc00091c6e0) (3) Data frame sent\nI0125 13:45:14.636347    1484 log.go:172] (0xc00096e420) Data frame received for 5\nI0125 13:45:14.636383    1484 log.go:172] (0xc0009ce000) (5) Data frame handling\nI0125 13:45:14.636423    1484 log.go:172] (0xc0009ce000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:45:14.755446    1484 log.go:172] (0xc00096e420) Data frame received for 1\nI0125 13:45:14.755613    1484 log.go:172] (0xc00091c640) (1) Data frame handling\nI0125 13:45:14.755642    1484 log.go:172] (0xc00091c640) (1) Data frame sent\nI0125 13:45:14.756078    1484 log.go:172] (0xc00096e420) (0xc00091c6e0) Stream removed, broadcasting: 3\nI0125 13:45:14.756326    1484 log.go:172] (0xc00096e420) (0xc00091c640) Stream removed, broadcasting: 1\nI0125 13:45:14.756530    1484 log.go:172] (0xc00096e420) (0xc0009ce000) Stream removed, broadcasting: 5\nI0125 13:45:14.757439    1484 log.go:172] (0xc00096e420) (0xc00091c640) Stream removed, broadcasting: 1\nI0125 13:45:14.757616    1484 log.go:172] (0xc00096e420) (0xc00091c6e0) Stream removed, broadcasting: 3\nI0125 13:45:14.757720    1484 log.go:172] (0xc00096e420) (0xc0009ce000) Stream removed, broadcasting: 5\n"
Jan 25 13:45:14.765: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:45:14.765: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 13:45:14.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:45:15.186: INFO: stderr: "I0125 13:45:14.964789    1505 log.go:172] (0xc0006f4630) (0xc000204b40) Create stream\nI0125 13:45:14.965012    1505 log.go:172] (0xc0006f4630) (0xc000204b40) Stream added, broadcasting: 1\nI0125 13:45:14.969203    1505 log.go:172] (0xc0006f4630) Reply frame received for 1\nI0125 13:45:14.969388    1505 log.go:172] (0xc0006f4630) (0xc0007ec000) Create stream\nI0125 13:45:14.969436    1505 log.go:172] (0xc0006f4630) (0xc0007ec000) Stream added, broadcasting: 3\nI0125 13:45:14.970543    1505 log.go:172] (0xc0006f4630) Reply frame received for 3\nI0125 13:45:14.970611    1505 log.go:172] (0xc0006f4630) (0xc00086e000) Create stream\nI0125 13:45:14.970643    1505 log.go:172] (0xc0006f4630) (0xc00086e000) Stream added, broadcasting: 5\nI0125 13:45:14.971702    1505 log.go:172] (0xc0006f4630) Reply frame received for 5\nI0125 13:45:15.069062    1505 log.go:172] (0xc0006f4630) Data frame received for 5\nI0125 13:45:15.069117    1505 log.go:172] (0xc00086e000) (5) Data frame handling\nI0125 13:45:15.069139    1505 log.go:172] (0xc00086e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:45:15.097003    1505 log.go:172] (0xc0006f4630) Data frame received for 3\nI0125 13:45:15.097069    1505 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0125 13:45:15.097091    1505 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0125 13:45:15.175478    1505 log.go:172] (0xc0006f4630) Data frame received for 1\nI0125 13:45:15.176088    1505 log.go:172] (0xc0006f4630) (0xc0007ec000) Stream removed, broadcasting: 3\nI0125 13:45:15.176313    1505 log.go:172] (0xc0006f4630) (0xc00086e000) Stream removed, broadcasting: 5\nI0125 13:45:15.176667    1505 log.go:172] (0xc000204b40) (1) Data frame handling\nI0125 13:45:15.176758    1505 log.go:172] (0xc000204b40) (1) Data frame sent\nI0125 13:45:15.176777    1505 log.go:172] (0xc0006f4630) (0xc000204b40) Stream removed, broadcasting: 1\nI0125 13:45:15.176824    1505 log.go:172] (0xc0006f4630) Go away received\nI0125 13:45:15.178315    1505 log.go:172] (0xc0006f4630) (0xc000204b40) Stream removed, broadcasting: 1\nI0125 13:45:15.178335    1505 log.go:172] (0xc0006f4630) (0xc0007ec000) Stream removed, broadcasting: 3\nI0125 13:45:15.178353    1505 log.go:172] (0xc0006f4630) (0xc00086e000) Stream removed, broadcasting: 5\n"
Jan 25 13:45:15.186: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:45:15.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 13:45:15.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-536 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 13:45:15.789: INFO: stderr: "I0125 13:45:15.453021    1525 log.go:172] (0xc0009d8420) (0xc00036a820) Create stream\nI0125 13:45:15.453492    1525 log.go:172] (0xc0009d8420) (0xc00036a820) Stream added, broadcasting: 1\nI0125 13:45:15.464054    1525 log.go:172] (0xc0009d8420) Reply frame received for 1\nI0125 13:45:15.464177    1525 log.go:172] (0xc0009d8420) (0xc0005aa3c0) Create stream\nI0125 13:45:15.464207    1525 log.go:172] (0xc0009d8420) (0xc0005aa3c0) Stream added, broadcasting: 3\nI0125 13:45:15.467177    1525 log.go:172] (0xc0009d8420) Reply frame received for 3\nI0125 13:45:15.467225    1525 log.go:172] (0xc0009d8420) (0xc000a14000) Create stream\nI0125 13:45:15.467240    1525 log.go:172] (0xc0009d8420) (0xc000a14000) Stream added, broadcasting: 5\nI0125 13:45:15.469466    1525 log.go:172] (0xc0009d8420) Reply frame received for 5\nI0125 13:45:15.620170    1525 log.go:172] (0xc0009d8420) Data frame received for 5\nI0125 13:45:15.620267    1525 log.go:172] (0xc000a14000) (5) Data frame handling\nI0125 13:45:15.620284    1525 log.go:172] (0xc000a14000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 13:45:15.640019    1525 log.go:172] (0xc0009d8420) Data frame received for 3\nI0125 13:45:15.640169    1525 log.go:172] (0xc0005aa3c0) (3) Data frame handling\nI0125 13:45:15.640215    1525 log.go:172] (0xc0005aa3c0) (3) Data frame sent\nI0125 13:45:15.768331    1525 log.go:172] (0xc0009d8420) Data frame received for 1\nI0125 13:45:15.768595    1525 log.go:172] (0xc00036a820) (1) Data frame handling\nI0125 13:45:15.768679    1525 log.go:172] (0xc00036a820) (1) Data frame sent\nI0125 13:45:15.768736    1525 log.go:172] (0xc0009d8420) (0xc00036a820) Stream removed, broadcasting: 1\nI0125 13:45:15.772465    1525 log.go:172] (0xc0009d8420) (0xc0005aa3c0) Stream removed, broadcasting: 3\nI0125 13:45:15.773863    1525 log.go:172] (0xc0009d8420) (0xc000a14000) Stream removed, broadcasting: 5\nI0125 13:45:15.773986    1525 log.go:172] (0xc0009d8420) (0xc00036a820) Stream removed, broadcasting: 1\nI0125 13:45:15.774053    1525 log.go:172] (0xc0009d8420) (0xc0005aa3c0) Stream removed, broadcasting: 3\nI0125 13:45:15.774084    1525 log.go:172] (0xc0009d8420) (0xc000a14000) Stream removed, broadcasting: 5\n"
Jan 25 13:45:15.789: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 13:45:15.789: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
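
The three execs above move nginx's index.html out of the web root, so each pod's HTTP readiness probe starts failing and the pods drop to Ready=false. A minimal way to watch the Ready condition flip by hand (a sketch, assuming the same namespace and pod names):

# Print the Ready condition of ss-0; expect "True" before the mv and "False" shortly after.
kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-536 \
  get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'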

Jan 25 13:45:15.789: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:45:15.800: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 13:45:25.824: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 13:45:25.824: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 13:45:25.824: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 13:45:25.866: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:25.866: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:25.866: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:25.866: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:25.866: INFO: 
Jan 25 13:45:25.866: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:27.697: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:27.697: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:27.697: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:27.697: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:27.698: INFO: 
Jan 25 13:45:27.698: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:28.717: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:28.718: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:28.718: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:28.718: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:28.718: INFO: 
Jan 25 13:45:28.718: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:29.734: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:29.734: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:29.734: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:29.734: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:29.734: INFO: 
Jan 25 13:45:29.734: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:30.754: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:30.754: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:30.754: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:30.754: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:30.754: INFO: 
Jan 25 13:45:30.754: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:31.767: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 13:45:31.767: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:31.767: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:31.767: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:31.768: INFO: 
Jan 25 13:45:31.768: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 13:45:32.781: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 13:45:32.781: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:32.781: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:32.781: INFO: 
Jan 25 13:45:32.781: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 13:45:33.805: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 13:45:33.806: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:33.806: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:33.806: INFO: 
Jan 25 13:45:33.806: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 13:45:34.819: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 13:45:34.819: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:20 +0000 UTC  }]
Jan 25 13:45:34.819: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:44:51 +0000 UTC  }]
Jan 25 13:45:34.819: INFO: 
Jan 25 13:45:34.819: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 13:45:35.832: INFO: Verifying statefulset ss doesn't scale past 0 for another 38.55398ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-536
Jan 25 13:45:36.839: INFO: Scaling statefulset ss to 0
Jan 25 13:45:36.867: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 25 13:45:36.872: INFO: Deleting all statefulset in ns statefulset-536
Jan 25 13:45:36.883: INFO: Scaling statefulset ss to 0
Jan 25 13:45:36.910: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:45:36.916: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:45:36.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-536" for this suite.
Jan 25 13:45:43.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:45:43.143: INFO: namespace statefulset-536 deletion completed in 6.16719055s

• [SLOW TEST:82.595 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
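
The burst-scaling assertion in the test above is that scale-down proceeds even though every replica is unready. Roughly the same scale-down, driven by hand (a sketch; it assumes the statefulset still exists under these names rather than re-running the suite):

# Scale the set to zero and watch status.replicas drain despite the unhealthy pods.
kubectl --namespace=statefulset-536 scale statefulset ss --replicas=0
kubectl --namespace=statefulset-536 get statefulset ss -o jsonpath='{.status.replicas}'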
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:45:43.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-080c6ca2-7167-4539-9fc4-25313f5891df
STEP: Creating a pod to test consume secrets
Jan 25 13:45:43.300: INFO: Waiting up to 5m0s for pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796" in namespace "secrets-4106" to be "success or failure"
Jan 25 13:45:43.377: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Pending", Reason="", readiness=false. Elapsed: 76.572427ms
Jan 25 13:45:45.385: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084854348s
Jan 25 13:45:47.394: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09347869s
Jan 25 13:45:49.437: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136625159s
Jan 25 13:45:51.446: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145828186s
Jan 25 13:45:53.459: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158593931s
STEP: Saw pod success
Jan 25 13:45:53.459: INFO: Pod "pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796" satisfied condition "success or failure"
Jan 25 13:45:53.465: INFO: Trying to get logs from node iruya-node pod pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796 container secret-volume-test: 
STEP: delete the pod
Jan 25 13:45:53.628: INFO: Waiting for pod pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796 to disappear
Jan 25 13:45:53.651: INFO: Pod pod-secrets-052e57ff-a008-4198-8dfb-0fdd2234d796 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:45:53.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4106" for this suite.
Jan 25 13:45:59.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:45:59.856: INFO: namespace secrets-4106 deletion completed in 6.194696363s

• [SLOW TEST:16.713 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
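
The Secrets test above creates a secret, mounts it into a pod as a volume, and passes once the pod reads the data back and exits cleanly. A hand-rolled equivalent (a sketch; the namespace, secret name, key, and image are assumptions, not taken from the log):

# Create a throwaway secret, then a pod that mounts it and prints the key.
kubectl create namespace secrets-demo
kubectl -n secrets-demo create secret generic demo-secret --from-literal=data-1=value-1
kubectl -n secrets-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
# Once the pod has completed:
kubectl -n secrets-demo logs secret-volume-demo   # expect: value-1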
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:45:59.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 25 13:45:59.957: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:46:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2881" for this suite.
Jan 25 13:46:20.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:46:21.074: INFO: namespace init-container-2881 deletion completed in 6.291475633s

• [SLOW TEST:21.217 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
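
The InitContainer test above asserts that, on a restartPolicy: Never pod, a failing init container fails the whole pod and the app container never starts. A minimal manifest that exercises the same path (a sketch; names and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]   # exits 1, so init never succeeds
  containers:
  - name: app
    image: busybox
    command: ["echo", "should never run"]
EOF
# Expect STATUS Init:Error, with the app container never started.
kubectl get pod init-fail-demo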
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:46:21.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:46:33.506: INFO: Waiting up to 5m0s for pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014" in namespace "pods-9111" to be "success or failure"
Jan 25 13:46:33.525: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Pending", Reason="", readiness=false. Elapsed: 19.440538ms
Jan 25 13:46:35.535: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029191947s
Jan 25 13:46:37.548: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041563104s
Jan 25 13:46:39.564: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058044324s
Jan 25 13:46:41.571: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06467358s
Jan 25 13:46:43.582: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075872546s
STEP: Saw pod success
Jan 25 13:46:43.582: INFO: Pod "client-envvars-cedda287-2299-45a1-b692-3b504d1da014" satisfied condition "success or failure"
Jan 25 13:46:43.586: INFO: Trying to get logs from node iruya-node pod client-envvars-cedda287-2299-45a1-b692-3b504d1da014 container env3cont: 
STEP: delete the pod
Jan 25 13:46:43.660: INFO: Waiting for pod client-envvars-cedda287-2299-45a1-b692-3b504d1da014 to disappear
Jan 25 13:46:43.665: INFO: Pod client-envvars-cedda287-2299-45a1-b692-3b504d1da014 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:46:43.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9111" for this suite.
Jan 25 13:47:27.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:47:27.968: INFO: namespace pods-9111 deletion completed in 44.29757439s

• [SLOW TEST:66.894 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
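
The Pods test above checks the service environment variables: a container created after a Service exists receives {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT style variables for it. A quick way to inspect them (a sketch; the service name "fooservice" and pod name are assumptions):

# A service named "fooservice" shows up in later pods as FOOSERVICE_SERVICE_HOST,
# FOOSERVICE_SERVICE_PORT, FOOSERVICE_PORT, and friends.
kubectl exec some-pod -- printenv | grep '^FOOSERVICE_'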
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:47:27.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 25 13:47:28.062: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8960" to be "success or failure"
Jan 25 13:47:28.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.473797ms
Jan 25 13:47:30.090: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028410781s
Jan 25 13:47:32.098: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036214569s
Jan 25 13:47:34.107: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045388524s
Jan 25 13:47:36.114: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05254332s
Jan 25 13:47:38.130: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068337955s
Jan 25 13:47:40.139: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.077417484s
STEP: Saw pod success
Jan 25 13:47:40.139: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 25 13:47:40.143: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 25 13:47:40.302: INFO: Waiting for pod pod-host-path-test to disappear
Jan 25 13:47:40.317: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:47:40.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8960" for this suite.
Jan 25 13:47:46.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:47:46.516: INFO: namespace hostpath-8960 deletion completed in 6.187867968s

• [SLOW TEST:18.548 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
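
The HostPath test above mounts a hostPath volume and checks, from inside the container, the mode of the mounted path. A hand-rolled equivalent (a sketch; the path, names, and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["ls", "-ld", "/test-volume"]   # prints the mounted directory's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory
EOF
# Once the pod has completed; /tmp is typically drwxrwxrwt on the host.
kubectl logs hostpath-mode-demo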
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:47:46.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4348
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4348
STEP: Creating statefulset with conflicting port in namespace statefulset-4348
STEP: Waiting until pod test-pod starts running in namespace statefulset-4348
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-4348
Jan 25 13:47:58.828: INFO: Observed stateful pod in namespace: statefulset-4348, name: ss-0, uid: 10b60c0f-9ce7-4393-ba20-0e2f1ef4f67a, status phase: Pending. Waiting for statefulset controller to delete.
Jan 25 13:48:06.530: INFO: Observed stateful pod in namespace: statefulset-4348, name: ss-0, uid: 10b60c0f-9ce7-4393-ba20-0e2f1ef4f67a, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 13:48:06.545: INFO: Observed stateful pod in namespace: statefulset-4348, name: ss-0, uid: 10b60c0f-9ce7-4393-ba20-0e2f1ef4f67a, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 13:48:06.552: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4348
STEP: Removing pod with conflicting port in namespace statefulset-4348
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4348 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 25 13:48:18.745: INFO: Deleting all statefulset in ns statefulset-4348
Jan 25 13:48:18.749: INFO: Scaling statefulset ss to 0
Jan 25 13:48:38.776: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 13:48:38.780: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:48:38.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4348" for this suite.
Jan 25 13:48:46.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:48:46.996: INFO: namespace statefulset-4348 deletion completed in 8.162880019s

• [SLOW TEST:60.479 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
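
The eviction test above manufactures a scheduling conflict: a bare pod holds a hostPort on the chosen node, then the statefulset's ss-0 wants the same port there, so ss-0 lands in Failed until the conflicting pod is removed and the controller recreates it. Two ways to watch that delete/recreate loop by hand (a sketch, assuming the namespace still exists):

# Watch ss-0 cycle through Pending -> Failed -> deleted -> recreated.
kubectl -n statefulset-4348 get pods -w
# Or read the controller's actions from the event stream.
kubectl -n statefulset-4348 get events --sort-by=.metadata.creationTimestamp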
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:48:46.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 25 13:48:53.819: INFO: 10 pods remaining
Jan 25 13:48:53.820: INFO: 10 pods have nil DeletionTimestamp
Jan 25 13:48:53.820: INFO: 
Jan 25 13:48:54.833: INFO: 0 pods remaining
Jan 25 13:48:54.833: INFO: 0 pods have nil DeletionTimestamp
Jan 25 13:48:54.833: INFO: 
STEP: Gathering metrics
W0125 13:48:55.534520       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 13:48:55.534: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:48:55.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4355" for this suite.
Jan 25 13:49:05.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:49:06.023: INFO: namespace gc-4355 deletion completed in 10.481148698s

• [SLOW TEST:19.026 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
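
The garbage-collector test above deletes the replication controller with deleteOptions requesting foreground cascading, so the RC object is kept (carrying a deletionTimestamp) until every pod it owns has been deleted — which is exactly the "10 pods remaining ... 0 pods remaining" progression in the log. Roughly the same request against the REST API (a sketch; the RC name "simpletest.rc" is an assumption):

# Foreground deletion: the RC lingers until its dependents are gone.
kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://localhost:8001/api/v1/namespaces/gc-4355/replicationcontrollers/simpletest.rc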
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:49:06.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:49:06.129: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 25 13:49:06.145: INFO: Number of nodes with available pods: 0
Jan 25 13:49:06.145: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 25 13:49:06.220: INFO: Number of nodes with available pods: 0
Jan 25 13:49:06.220: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:07.304: INFO: Number of nodes with available pods: 0
Jan 25 13:49:07.304: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:08.232: INFO: Number of nodes with available pods: 0
Jan 25 13:49:08.232: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:09.295: INFO: Number of nodes with available pods: 0
Jan 25 13:49:09.295: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:10.229: INFO: Number of nodes with available pods: 0
Jan 25 13:49:10.229: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:11.237: INFO: Number of nodes with available pods: 0
Jan 25 13:49:11.237: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:12.228: INFO: Number of nodes with available pods: 0
Jan 25 13:49:12.228: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:13.226: INFO: Number of nodes with available pods: 0
Jan 25 13:49:13.226: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:14.231: INFO: Number of nodes with available pods: 0
Jan 25 13:49:14.231: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:15.232: INFO: Number of nodes with available pods: 1
Jan 25 13:49:15.233: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 25 13:49:15.317: INFO: Number of nodes with available pods: 1
Jan 25 13:49:15.317: INFO: Number of running nodes: 0, number of available pods: 1
Jan 25 13:49:16.332: INFO: Number of nodes with available pods: 0
Jan 25 13:49:16.332: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 25 13:49:16.394: INFO: Number of nodes with available pods: 0
Jan 25 13:49:16.395: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:17.432: INFO: Number of nodes with available pods: 0
Jan 25 13:49:17.432: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:18.403: INFO: Number of nodes with available pods: 0
Jan 25 13:49:18.403: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:19.427: INFO: Number of nodes with available pods: 0
Jan 25 13:49:19.427: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:20.404: INFO: Number of nodes with available pods: 0
Jan 25 13:49:20.404: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:21.402: INFO: Number of nodes with available pods: 0
Jan 25 13:49:21.402: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:22.402: INFO: Number of nodes with available pods: 0
Jan 25 13:49:22.403: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:23.418: INFO: Number of nodes with available pods: 0
Jan 25 13:49:23.418: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:24.405: INFO: Number of nodes with available pods: 0
Jan 25 13:49:24.405: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:25.409: INFO: Number of nodes with available pods: 0
Jan 25 13:49:25.409: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:26.406: INFO: Number of nodes with available pods: 0
Jan 25 13:49:26.406: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:27.404: INFO: Number of nodes with available pods: 0
Jan 25 13:49:27.404: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:28.406: INFO: Number of nodes with available pods: 0
Jan 25 13:49:28.406: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:29.407: INFO: Number of nodes with available pods: 0
Jan 25 13:49:29.407: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:30.409: INFO: Number of nodes with available pods: 0
Jan 25 13:49:30.409: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:31.403: INFO: Number of nodes with available pods: 0
Jan 25 13:49:31.403: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:32.402: INFO: Number of nodes with available pods: 0
Jan 25 13:49:32.402: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:33.411: INFO: Number of nodes with available pods: 0
Jan 25 13:49:33.411: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:34.408: INFO: Number of nodes with available pods: 0
Jan 25 13:49:34.408: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:49:35.878: INFO: Number of nodes with available pods: 1
Jan 25 13:49:35.878: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6951, will wait for the garbage collector to delete the pods
Jan 25 13:49:36.145: INFO: Deleting DaemonSet.extensions daemon-set took: 17.749289ms
Jan 25 13:49:36.546: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.685114ms
Jan 25 13:49:43.050: INFO: Number of nodes with available pods: 0
Jan 25 13:49:43.050: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 13:49:43.055: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6951/daemonsets","resourceVersion":"21814894"},"items":null}

Jan 25 13:49:43.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6951/pods","resourceVersion":"21814894"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:49:43.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6951" for this suite.
Jan 25 13:49:49.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:49:49.304: INFO: namespace daemonsets-6951 deletion completed in 6.185766082s

• [SLOW TEST:43.281 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
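For reference, the "daemon-set" object this spec runs and stops can be approximated with the apps/v1 types. A minimal sketch in Go (the label key and container name are assumptions; the pause image appears elsewhere in this run):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label key is hypothetical; the log only shows the DaemonSet name.
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-6951"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // image seen elsewhere in this run
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Deleting the DaemonSet with a propagation policy, rather than orphaning, is what produces the "will wait for the garbage collector to delete the pods" phase above.
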
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:49:49.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 25 13:49:49.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 13:49:49.407: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 13:49:49.411: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 25 13:49:49.423: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.423: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 13:49:49.423: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 25 13:49:49.423: INFO: 	Container weave ready: true, restart count 0
Jan 25 13:49:49.423: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 13:49:49.423: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 25 13:49:49.440: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 25 13:49:49.440: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container coredns ready: true, restart count 0
Jan 25 13:49:49.440: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container etcd ready: true, restart count 0
Jan 25 13:49:49.440: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 25 13:49:49.440: INFO: 	Container weave ready: true, restart count 0
Jan 25 13:49:49.440: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 13:49:49.440: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container coredns ready: true, restart count 0
Jan 25 13:49:49.440: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 25 13:49:49.440: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 13:49:49.440: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 25 13:49:49.440: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 25 13:49:49.579: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 25 13:49:49.579: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster's CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8.15ed256bec938221], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6625/filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8.15ed256d2bc6b01d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8.15ed256de93930f4], Reason = [Created], Message = [Created container filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8.15ed256e0b339411], Reason = [Started], Message = [Started container filler-pod-6b0d557d-fc3d-42b7-8455-64c5040f92d8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd.15ed256beab6f866], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6625/filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd.15ed256d4917f408], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd.15ed256e2925d91f], Reason = [Created], Message = [Created container filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd.15ed256e4db458ea], Reason = [Started], Message = [Started container filler-pod-ded5ae85-fa6e-4dd9-a657-ea3cdff2c6dd]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ed256ebcda87d1], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:50:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6625" for this suite.
Jan 25 13:50:09.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:50:09.403: INFO: namespace sched-pred-6625 deletion completed in 6.448468005s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.099 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
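The FailedScheduling event above is plain request accounting: the scheduler sums per-node container CPU requests (the totals logged before the test) and rejects any pod whose request does not fit on any node. A sketch of a pod that would trip "Insufficient cpu" once the filler pods hold the remaining capacity (the 600m figure is an illustrative assumption; the real test computes it from node allocatable):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// If no node has this much unreserved CPU, the pod stays
					// Pending with "0/2 nodes are available: 2 Insufficient cpu."
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"), // hypothetical amount
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
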
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:50:09.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 25 13:50:24.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-38ba0c41-c783-492b-aaa4-5112b7a8c4f3 -c busybox-main-container --namespace=emptydir-6115 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 25 13:50:24.687: INFO: stderr: "I0125 13:50:24.187840    1545 log.go:172] (0xc00013a790) (0xc0008be0a0) Create stream\nI0125 13:50:24.188285    1545 log.go:172] (0xc00013a790) (0xc0008be0a0) Stream added, broadcasting: 1\nI0125 13:50:24.193531    1545 log.go:172] (0xc00013a790) Reply frame received for 1\nI0125 13:50:24.193572    1545 log.go:172] (0xc00013a790) (0xc00061c280) Create stream\nI0125 13:50:24.193580    1545 log.go:172] (0xc00013a790) (0xc00061c280) Stream added, broadcasting: 3\nI0125 13:50:24.194702    1545 log.go:172] (0xc00013a790) Reply frame received for 3\nI0125 13:50:24.194727    1545 log.go:172] (0xc00013a790) (0xc00061c320) Create stream\nI0125 13:50:24.194736    1545 log.go:172] (0xc00013a790) (0xc00061c320) Stream added, broadcasting: 5\nI0125 13:50:24.196113    1545 log.go:172] (0xc00013a790) Reply frame received for 5\nI0125 13:50:24.420396    1545 log.go:172] (0xc00013a790) Data frame received for 3\nI0125 13:50:24.420533    1545 log.go:172] (0xc00061c280) (3) Data frame handling\nI0125 13:50:24.420557    1545 log.go:172] (0xc00061c280) (3) Data frame sent\nI0125 13:50:24.673345    1545 log.go:172] (0xc00013a790) Data frame received for 1\nI0125 13:50:24.673698    1545 log.go:172] (0xc00013a790) (0xc00061c280) Stream removed, broadcasting: 3\nI0125 13:50:24.673819    1545 log.go:172] (0xc0008be0a0) (1) Data frame handling\nI0125 13:50:24.673852    1545 log.go:172] (0xc0008be0a0) (1) Data frame sent\nI0125 13:50:24.673868    1545 log.go:172] (0xc00013a790) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0125 13:50:24.674417    1545 log.go:172] (0xc00013a790) (0xc00061c320) Stream removed, broadcasting: 5\nI0125 13:50:24.674823    1545 log.go:172] (0xc00013a790) Go away received\nI0125 13:50:24.675217    1545 log.go:172] (0xc00013a790) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0125 13:50:24.675247    1545 log.go:172] (0xc00013a790) (0xc00061c280) Stream removed, broadcasting: 3\nI0125 13:50:24.675261    1545 log.go:172] (0xc00013a790) (0xc00061c320) Stream removed, broadcasting: 5\n"
Jan 25 13:50:24.687: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:50:24.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6115" for this suite.
Jan 25 13:50:30.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:50:30.910: INFO: namespace emptydir-6115 deletion completed in 6.213036322s

• [SLOW TEST:21.507 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
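The shared-volume pod behind the kubectl exec above pairs two containers on a single emptyDir: whatever one container writes under the mount path is immediately visible to the other. A sketch, with the container names, mount path, file name, and expected output taken from the log (the writer command and nginx image tag are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mounts := []corev1.VolumeMount{{Name: "volumeshare", MountPath: "/usr/share/volumeshare"}}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "volumeshare",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:  "busybox-main-container",
					Image: "docker.io/library/busybox:1.29",
					// Writer command is an assumption; the log only shows the result.
					Command: []string{"sh", "-c",
						"echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: mounts,
				},
				{
					Name:         "nginx-container",
					Image:        "nginx", // image tag is an assumption
					VolumeMounts: mounts,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
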
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:50:30.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 25 13:50:41.617: INFO: Successfully updated pod "annotationupdatebebd65a8-7d13-47f7-a326-d895a41ffeb3"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:50:43.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2711" for this suite.
Jan 25 13:51:05.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:51:05.919: INFO: namespace downward-api-2711 deletion completed in 22.207886111s

• [SLOW TEST:35.008 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
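The annotation-update spec leans on a property of downward API volumes: the kubelet rewrites the projected files when pod metadata changes, so a running container observes new annotation values without a restart. A minimal sketch of such a pod (volume name, mount path, file path, image, and the sample annotation are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate",
			Annotations: map[string]string{"build": "one"}, // sample value; the test later updates it
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "k8s.gcr.io/pause:3.1", // image is an assumption
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
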
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:51:05.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 25 13:51:06.070: INFO: Waiting up to 5m0s for pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2" in namespace "downward-api-9429" to be "success or failure"
Jan 25 13:51:06.077: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.095033ms
Jan 25 13:51:08.104: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034349309s
Jan 25 13:51:10.114: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044419276s
Jan 25 13:51:12.125: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055354348s
Jan 25 13:51:14.136: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066681565s
Jan 25 13:51:16.152: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081876529s
STEP: Saw pod success
Jan 25 13:51:16.152: INFO: Pod "downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2" satisfied condition "success or failure"
Jan 25 13:51:16.157: INFO: Trying to get logs from node iruya-node pod downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2 container dapi-container: 
STEP: delete the pod
Jan 25 13:51:16.375: INFO: Waiting for pod downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2 to disappear
Jan 25 13:51:16.384: INFO: Pod downward-api-f5e7affd-0212-41b6-af35-550906ffa3c2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:51:16.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9429" for this suite.
Jan 25 13:51:22.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:51:22.583: INFO: namespace downward-api-9429 deletion completed in 6.18633937s

• [SLOW TEST:16.663 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
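The env-var flavor of the downward API maps a container's own requests and limits into its environment via resourceFieldRef, which is what the dapi-container above reads back. A sketch of the four variables the test name describes (the variable names are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	var env []corev1.EnvVar
	for name, res := range map[string]string{
		"CPU_LIMIT":      "limits.cpu",
		"MEMORY_LIMIT":   "limits.memory",
		"CPU_REQUEST":    "requests.cpu",
		"MEMORY_REQUEST": "requests.memory",
	} {
		env = append(env, corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				// ContainerName may be omitted for env vars; it defaults to
				// the container the variable is declared in.
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: res},
			},
		})
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
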
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:51:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to also have rc simpletest-rc-to-stay as an owner
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0125 13:51:34.627177       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 13:51:34.627: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:51:34.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5766" for this suite.
Jan 25 13:51:47.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:51:49.959: INFO: namespace gc-5766 deletion completed in 15.32746427s

• [SLOW TEST:27.375 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
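The behavior verified above is driven entirely by metadata.ownerReferences: the garbage collector removes a dependent only when all of its owners are gone, so the pods that also name simpletest-rc-to-stay survive the deletion of simpletest-rc-to-be-deleted. A sketch of such a dual-owner reference list (the UIDs are placeholders; real references carry the owners' actual UIDs):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	block := true
	owners := []metav1.OwnerReference{
		{
			APIVersion:         "v1",
			Kind:               "ReplicationController",
			Name:               "simpletest-rc-to-be-deleted",
			UID:                types.UID("00000000-0000-0000-0000-000000000001"), // placeholder
			BlockOwnerDeletion: &block,
		},
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        types.UID("00000000-0000-0000-0000-000000000002"), // placeholder
		},
	}
	out, _ := json.MarshalIndent(owners, "", "  ")
	fmt.Println(string(out))
}
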
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:51:49.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:51:50.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:52:02.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5699" for this suite.
Jan 25 13:52:46.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:52:46.453: INFO: namespace pods-5699 deletion completed in 44.187988911s

• [SLOW TEST:56.493 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
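The conformance test fetches the container's logs over a websocket; with client-go, an equivalent read goes through the same /log subresource. A sketch against the kubeconfig used in this run (the pod name is a placeholder, since the log never prints it):

package main

import (
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req := clientset.CoreV1().Pods("pods-5699").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
	body, err := req.Stream() // newer client-go releases take a context: req.Stream(ctx)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer body.Close()
	io.Copy(os.Stdout, body)
}
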
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:52:46.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 25 13:52:46.613: INFO: PodSpec: initContainers in spec.initContainers
Jan 25 13:53:52.374: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5437cf61-3559-4c07-86d5-994452f53251", GenerateName:"", Namespace:"init-container-9987", SelfLink:"/api/v1/namespaces/init-container-9987/pods/pod-init-5437cf61-3559-4c07-86d5-994452f53251", UID:"7fc176eb-07e4-41f1-b53f-ab145bae90fd", ResourceVersion:"21815543", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715557166, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"613230084"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9qpsv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002587300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9qpsv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9qpsv", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9qpsv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00261a908), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a97200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00261a990)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00261a9b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00261a9b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00261a9bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715557166, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715557166, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715557166, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715557166, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00141dbe0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002157f80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002054000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b4880573c4f2ed1482086288b05ceb1b26d78ee25b642260c3b31f95a03adb5c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00141dc20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00141dc00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:53:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9987" for this suite.
Jan 25 13:54:14.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:54:14.621: INFO: namespace init-container-9987 deletion completed in 22.232407748s

• [SLOW TEST:88.168 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
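Stripped of its status, the pod dumped above is small: two init containers that must each exit zero, in order, before run1 may start, with init1 forced to fail. A reconstruction of just the spec from the fields in the dump:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			// RestartPolicy Always means the kubelet keeps retrying init1 with
			// backoff (RestartCount: 3 in the dump); run1 never starts because
			// init containers run strictly in sequence.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{Name: "run1", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
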
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:54:14.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9832
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9832
STEP: Deleting pre-stop pod
Jan 25 13:54:37.878: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:54:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9832" for this suite.
Jan 25 13:55:17.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:55:18.086: INFO: namespace prestop-9832 deletion completed in 40.149553423s

• [SLOW TEST:63.464 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
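The "prestop": 1 counter in the tester's report above is incremented when the pre-stop pod's preStop hook fires during deletion. The hook itself is ordinary container lifecycle configuration; a sketch of one container (image and target URL are assumptions — the real test calls back to the server pod's IP). Note that in the v1.15-era API the handler type is corev1.Handler; later releases renamed it LifecycleHandler:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "tester",
		Image: "docker.io/library/busybox:1.29", // image is an assumption
		Lifecycle: &corev1.Lifecycle{
			// On pod deletion the kubelet runs this before sending SIGTERM.
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// Hypothetical endpoint standing in for the server pod's IP.
					Command: []string{"wget", "-O-", "http://server.prestop-9832:8080/prestop"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
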
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:55:18.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:55:18.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277" in namespace "downward-api-3576" to be "success or failure"
Jan 25 13:55:18.207: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Pending", Reason="", readiness=false. Elapsed: 9.476959ms
Jan 25 13:55:20.216: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018493829s
Jan 25 13:55:22.231: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033390611s
Jan 25 13:55:24.237: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039960243s
Jan 25 13:55:26.244: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Running", Reason="", readiness=true. Elapsed: 8.046280109s
Jan 25 13:55:28.255: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057314972s
STEP: Saw pod success
Jan 25 13:55:28.255: INFO: Pod "downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277" satisfied condition "success or failure"
Jan 25 13:55:28.259: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277 container client-container: 
STEP: delete the pod
Jan 25 13:55:28.362: INFO: Waiting for pod downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277 to disappear
Jan 25 13:55:28.372: INFO: Pod downwardapi-volume-f8412ee0-529b-4b9a-b532-9472a9c42277 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:55:28.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3576" for this suite.
Jan 25 13:55:34.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:55:34.467: INFO: namespace downward-api-3576 deletion completed in 6.088319958s

• [SLOW TEST:16.380 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
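The volume this spec mounts exposes the container's own CPU request as a file under the downward API mount. A sketch of the single item involved (the file name and divisor are assumptions; unlike the env-var form, a resource field in a volume must name its container):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_request", // file name is an assumption
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // container name from the log above
			Resource:      "requests.cpu",
			Divisor:       resource.MustParse("1m"), // report the value in millicores
		},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}
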
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:55:34.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:55:34.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d" in namespace "projected-5975" to be "success or failure"
Jan 25 13:55:34.588: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.55277ms
Jan 25 13:55:36.609: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033713887s
Jan 25 13:55:38.619: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04368919s
Jan 25 13:55:40.627: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052285001s
Jan 25 13:55:42.641: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066094053s
Jan 25 13:55:44.653: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078193852s
STEP: Saw pod success
Jan 25 13:55:44.653: INFO: Pod "downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d" satisfied condition "success or failure"
Jan 25 13:55:44.659: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d container client-container: 
STEP: delete the pod
Jan 25 13:55:44.775: INFO: Waiting for pod downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d to disappear
Jan 25 13:55:44.783: INFO: Pod downwardapi-volume-075939ec-3f66-4ff0-9138-aad17c9edc0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:55:44.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5975" for this suite.
Jan 25 13:55:50.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:55:51.009: INFO: namespace projected-5975 deletion completed in 6.220244724s

• [SLOW TEST:16.542 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
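The projected variant of the same check differs only in packaging: a single projected volume can merge downwardAPI, secret, and configMap sources under one mount point. A sketch of the downwardAPI projection (volume name and file path are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // name is an assumption
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request", // assumption
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
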
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:55:51.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1334d90b-b98d-4ca4-b086-368ac717518d
STEP: Creating a pod to test consume secrets
Jan 25 13:55:51.085: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6" in namespace "projected-2883" to be "success or failure"
Jan 25 13:55:51.092: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.843945ms
Jan 25 13:55:53.102: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01685033s
Jan 25 13:55:55.134: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048577814s
Jan 25 13:55:57.150: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06429341s
Jan 25 13:55:59.155: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069045987s
STEP: Saw pod success
Jan 25 13:55:59.155: INFO: Pod "pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6" satisfied condition "success or failure"
Jan 25 13:55:59.158: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 13:55:59.229: INFO: Waiting for pod pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6 to disappear
Jan 25 13:55:59.238: INFO: Pod pod-projected-secrets-9c851f5e-8ea6-46cb-bf15-6436a7414ce6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:55:59.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2883" for this suite.
Jan 25 13:56:05.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:56:05.403: INFO: namespace projected-2883 deletion completed in 6.159012843s

• [SLOW TEST:14.393 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
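"Mappings and Item Mode" in the test name refers to the items list of a secret projection: each selected key is remapped to a chosen path and given an explicit file mode instead of the volume default. A sketch of one such projection source (key, path, and mode values are assumptions; the secret name comes from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // owner read-only; value is an assumption
	src := corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "projected-secret-test-map-1334d90b-b98d-4ca4-b086-368ac717518d",
			},
			Items: []corev1.KeyToPath{{
				Key:  "data-1",          // key name is an assumption
				Path: "new-path-data-1", // target path is an assumption
				Mode: &mode,
			}},
		},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}
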
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:56:05.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 13:56:05.461: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:56:06.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7273" for this suite.
Jan 25 13:56:12.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:56:13.015: INFO: namespace custom-resource-definition-7273 deletion completed in 6.327432959s

• [SLOW TEST:7.612 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
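Creating and deleting a CRD needs nothing beyond the definition object itself: once the CRD is established, the named kind is served by the apiserver, and deleting the CRD removes the endpoint again. A sketch using the apiextensions.k8s.io/v1beta1 types a v1.15 apiserver accepts (the group, kind, and names are assumptions; the test generates random ones):

package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextv1beta1.CustomResourceDefinition{
		TypeMeta: metav1.TypeMeta{Kind: "CustomResourceDefinition", APIVersion: "apiextensions.k8s.io/v1beta1"},
		// The metadata name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com", // assumption
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "testcrds",
				Singular: "testcrd",
				Kind:     "TestCrd",
				ListKind: "TestCrdList",
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
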
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:56:13.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:56:13.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe" in namespace "projected-8133" to be "success or failure"
Jan 25 13:56:13.215: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 18.850168ms
Jan 25 13:56:15.221: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024844662s
Jan 25 13:56:17.227: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030954813s
Jan 25 13:56:19.267: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070483826s
Jan 25 13:56:21.275: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07839521s
STEP: Saw pod success
Jan 25 13:56:21.275: INFO: Pod "downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe" satisfied condition "success or failure"
Jan 25 13:56:21.277: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe container client-container: 
STEP: delete the pod
Jan 25 13:56:21.467: INFO: Waiting for pod downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe to disappear
Jan 25 13:56:21.473: INFO: Pod downwardapi-volume-de4e6c43-dbed-434e-9970-e8725fe5c8fe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:56:21.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8133" for this suite.
Jan 25 13:56:27.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:56:27.656: INFO: namespace projected-8133 deletion completed in 6.176026855s

• [SLOW TEST:14.641 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:56:27.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 13:56:27.885: INFO: Number of nodes with available pods: 0
Jan 25 13:56:27.885: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:28.910: INFO: Number of nodes with available pods: 0
Jan 25 13:56:28.910: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:30.098: INFO: Number of nodes with available pods: 0
Jan 25 13:56:30.098: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:30.960: INFO: Number of nodes with available pods: 0
Jan 25 13:56:30.960: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:31.912: INFO: Number of nodes with available pods: 0
Jan 25 13:56:31.913: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:32.900: INFO: Number of nodes with available pods: 0
Jan 25 13:56:32.900: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:35.288: INFO: Number of nodes with available pods: 0
Jan 25 13:56:35.288: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:35.921: INFO: Number of nodes with available pods: 0
Jan 25 13:56:35.921: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:36.907: INFO: Number of nodes with available pods: 0
Jan 25 13:56:36.907: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:37.976: INFO: Number of nodes with available pods: 1
Jan 25 13:56:37.976: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:38.907: INFO: Number of nodes with available pods: 1
Jan 25 13:56:38.907: INFO: Node iruya-node is running more than one daemon pod
Jan 25 13:56:39.907: INFO: Number of nodes with available pods: 2
Jan 25 13:56:39.907: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 25 13:56:40.035: INFO: Number of nodes with available pods: 1
Jan 25 13:56:40.035: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:41.586: INFO: Number of nodes with available pods: 1
Jan 25 13:56:41.586: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:42.048: INFO: Number of nodes with available pods: 1
Jan 25 13:56:42.048: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:43.051: INFO: Number of nodes with available pods: 1
Jan 25 13:56:43.051: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:44.075: INFO: Number of nodes with available pods: 1
Jan 25 13:56:44.075: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:45.066: INFO: Number of nodes with available pods: 1
Jan 25 13:56:45.066: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:46.564: INFO: Number of nodes with available pods: 1
Jan 25 13:56:46.564: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:47.056: INFO: Number of nodes with available pods: 1
Jan 25 13:56:47.056: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:48.060: INFO: Number of nodes with available pods: 1
Jan 25 13:56:48.061: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 25 13:56:49.052: INFO: Number of nodes with available pods: 2
Jan 25 13:56:49.052: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7454, will wait for the garbage collector to delete the pods
Jan 25 13:56:49.145: INFO: Deleting DaemonSet.extensions daemon-set took: 18.956018ms
Jan 25 13:56:49.446: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.53587ms
Jan 25 13:56:56.364: INFO: Number of nodes with available pods: 0
Jan 25 13:56:56.365: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 13:56:56.371: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7454/daemonsets","resourceVersion":"21816012"},"items":null}

Jan 25 13:56:56.376: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7454/pods","resourceVersion":"21816012"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:56:56.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7454" for this suite.
Jan 25 13:57:02.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:57:02.542: INFO: namespace daemonsets-7454 deletion completed in 6.143866231s

• [SLOW TEST:34.885 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
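A note on reading the DaemonSet poll above: the suite logs the generic retry line "Node ... is running more than one daemon pod" whenever the per-node pod count is not exactly one, so it also appears while the count is still zero. A Go sketch of the availability check being polled, assuming an illustrative label selector (the namespace is the one from the log):

// Sketch only: count distinct nodes that have a Ready daemon pod, which is
// what "Number of nodes with available pods" reports while polling.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    pods, err := cs.CoreV1().Pods("daemonsets-7454").List(context.TODO(),
        metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"}) // selector is an assumption
    if err != nil {
        panic(err)
    }
    nodesWithPod := map[string]bool{}
    for _, p := range pods.Items {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                nodesWithPod[p.Spec.NodeName] = true
            }
        }
    }
    fmt.Println("Number of nodes with available pods:", len(nodesWithPod))
}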
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:57:02.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-da014dbd-89fb-4a59-98cd-60279bdc29fe
STEP: Creating a pod to test consume secrets
Jan 25 13:57:02.661: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263" in namespace "projected-5909" to be "success or failure"
Jan 25 13:57:02.689: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263": Phase="Pending", Reason="", readiness=false. Elapsed: 27.477621ms
Jan 25 13:57:04.700: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038713878s
Jan 25 13:57:06.716: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055244247s
Jan 25 13:57:08.732: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070448092s
Jan 25 13:57:10.749: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087497873s
STEP: Saw pod success
Jan 25 13:57:10.749: INFO: Pod "pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263" satisfied condition "success or failure"
Jan 25 13:57:10.754: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 13:57:10.931: INFO: Waiting for pod pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263 to disappear
Jan 25 13:57:10.975: INFO: Pod pod-projected-secrets-1befd8cf-0b77-4564-b1ce-64c93723a263 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:57:10.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5909" for this suite.
Jan 25 13:57:17.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:57:17.215: INFO: namespace projected-5909 deletion completed in 6.227786931s

• [SLOW TEST:14.672 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
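A sketch of the projected-secret volume shape this test exercises, with defaultMode applied to every projected file (the secret name is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400)
    vol := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode, // file mode for all projected entries
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}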
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:57:17.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:58:17.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5193" for this suite.
Jan 25 13:58:39.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:58:39.635: INFO: namespace container-probe-5193 deletion completed in 22.334369881s

• [SLOW TEST:82.420 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
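The readiness-probe test above has no STEP lines between setup and teardown because it only watches the pod for a minute: a probe that always fails must leave the pod Running but never Ready, with a restart count of zero (readiness probes never restart containers). A sketch of such a container, assuming a recent client-go where Probe embeds ProbeHandler (older releases name the field Handler):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "test-webserver",
        Image:   "busybox", // illustrative; the suite uses its own test image
        Command: []string{"sleep", "600"},
        ReadinessProbe: &corev1.Probe{
            // /bin/false always exits 1, so the probe never succeeds.
            ProbeHandler:        corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
            InitialDelaySeconds: 5,
            PeriodSeconds:       5,
        },
    }
    fmt.Printf("%+v\n", c.ReadinessProbe)
}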
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:58:39.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 25 13:58:39.939: INFO: Waiting up to 5m0s for pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363" in namespace "downward-api-6055" to be "success or failure"
Jan 25 13:58:39.955: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Pending", Reason="", readiness=false. Elapsed: 16.026725ms
Jan 25 13:58:41.966: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027150242s
Jan 25 13:58:43.982: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042663944s
Jan 25 13:58:45.991: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0523297s
Jan 25 13:58:48.001: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061945734s
Jan 25 13:58:50.011: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07211082s
STEP: Saw pod success
Jan 25 13:58:50.011: INFO: Pod "downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363" satisfied condition "success or failure"
Jan 25 13:58:50.016: INFO: Trying to get logs from node iruya-node pod downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363 container dapi-container: 
STEP: delete the pod
Jan 25 13:58:50.175: INFO: Waiting for pod downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363 to disappear
Jan 25 13:58:50.186: INFO: Pod downward-api-7a476be8-5668-4199-8bb1-8a8b4b673363 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:58:50.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6055" for this suite.
Jan 25 13:58:56.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:58:56.409: INFO: namespace downward-api-6055 deletion completed in 6.215436655s

• [SLOW TEST:16.773 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
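A sketch of the downward API environment variable under test, populated from status.hostIP; the dapi-container then simply echoes it:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    env := corev1.EnvVar{
        Name: "HOST_IP",
        ValueFrom: &corev1.EnvVarSource{
            // Resolved by the kubelet to the IP of the node running the pod.
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
        },
    }
    fmt.Printf("%+v\n", env)
}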
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:58:56.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b5d26fef-1500-4e77-8e94-3a41d24487f1
STEP: Creating secret with name s-test-opt-upd-7cf6b845-fff2-4cbe-87ff-160433974ff5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b5d26fef-1500-4e77-8e94-3a41d24487f1
STEP: Updating secret s-test-opt-upd-7cf6b845-fff2-4cbe-87ff-160433974ff5
STEP: Creating secret with name s-test-opt-create-d7f1fec6-a6f2-4f7a-b59b-277dfb2a17be
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:59:11.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8098" for this suite.
Jan 25 13:59:33.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:59:33.160: INFO: namespace secrets-8098 deletion completed in 22.145295845s

• [SLOW TEST:36.750 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
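A sketch of the "optional" secret volume this test manipulates: with Optional set, the pod starts even while the secret is absent, and the kubelet projects the data once the secret appears, which is what the "waiting to observe update in volume" step polls for (the secret name is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    vol := corev1.Volume{
        Name: "creates-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: "s-test-opt-create", // the suite generates unique names
                Optional:   &optional,           // missing secret is not a mount error
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}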
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:59:33.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 25 13:59:33.298: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 13:59:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3835" for this suite.
Jan 25 13:59:40.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:59:40.779: INFO: namespace replication-controller-3835 deletion completed in 6.339465242s

• [SLOW TEST:7.620 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
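A sketch of "the matched label of one of its pods change": a strategic-merge patch rewriting the matching label, after which the ReplicationController releases the pod (it keeps running but is no longer counted as a replica). Pod name, namespace and label values are illustrative:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    // Overwrite the label the RC selector matches on.
    patch := []byte(`{"metadata":{"labels":{"name":"pod-release-released"}}}`)
    _, err = cs.CoreV1().Pods("replication-controller-3835").Patch(
        context.TODO(), "pod-release-abcde", types.StrategicMergePatchType,
        patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
}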
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 13:59:40.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6588
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 13:59:40.970: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 14:00:15.310: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:00:15.310: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:00:15.387630       8 log.go:172] (0xc001206420) (0xc001378500) Create stream
I0125 14:00:15.387691       8 log.go:172] (0xc001206420) (0xc001378500) Stream added, broadcasting: 1
I0125 14:00:15.394968       8 log.go:172] (0xc001206420) Reply frame received for 1
I0125 14:00:15.395031       8 log.go:172] (0xc001206420) (0xc0014800a0) Create stream
I0125 14:00:15.395055       8 log.go:172] (0xc001206420) (0xc0014800a0) Stream added, broadcasting: 3
I0125 14:00:15.398515       8 log.go:172] (0xc001206420) Reply frame received for 3
I0125 14:00:15.398613       8 log.go:172] (0xc001206420) (0xc0025360a0) Create stream
I0125 14:00:15.398642       8 log.go:172] (0xc001206420) (0xc0025360a0) Stream added, broadcasting: 5
I0125 14:00:15.401205       8 log.go:172] (0xc001206420) Reply frame received for 5
I0125 14:00:15.585777       8 log.go:172] (0xc001206420) Data frame received for 3
I0125 14:00:15.585867       8 log.go:172] (0xc0014800a0) (3) Data frame handling
I0125 14:00:15.585906       8 log.go:172] (0xc0014800a0) (3) Data frame sent
I0125 14:00:15.738973       8 log.go:172] (0xc001206420) Data frame received for 1
I0125 14:00:15.739162       8 log.go:172] (0xc001206420) (0xc0014800a0) Stream removed, broadcasting: 3
I0125 14:00:15.739257       8 log.go:172] (0xc001378500) (1) Data frame handling
I0125 14:00:15.739299       8 log.go:172] (0xc001378500) (1) Data frame sent
I0125 14:00:15.739622       8 log.go:172] (0xc001206420) (0xc0025360a0) Stream removed, broadcasting: 5
I0125 14:00:15.739695       8 log.go:172] (0xc001206420) (0xc001378500) Stream removed, broadcasting: 1
I0125 14:00:15.739726       8 log.go:172] (0xc001206420) Go away received
I0125 14:00:15.740720       8 log.go:172] (0xc001206420) (0xc001378500) Stream removed, broadcasting: 1
I0125 14:00:15.740798       8 log.go:172] (0xc001206420) (0xc0014800a0) Stream removed, broadcasting: 3
I0125 14:00:15.740823       8 log.go:172] (0xc001206420) (0xc0025360a0) Stream removed, broadcasting: 5
Jan 25 14:00:15.740: INFO: Found all expected endpoints: [netserver-0]
Jan 25 14:00:15.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:00:15.751: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:00:15.812971       8 log.go:172] (0xc00090ef20) (0xc002536640) Create stream
I0125 14:00:15.813055       8 log.go:172] (0xc00090ef20) (0xc002536640) Stream added, broadcasting: 1
I0125 14:00:15.820451       8 log.go:172] (0xc00090ef20) Reply frame received for 1
I0125 14:00:15.820762       8 log.go:172] (0xc00090ef20) (0xc001134320) Create stream
I0125 14:00:15.820816       8 log.go:172] (0xc00090ef20) (0xc001134320) Stream added, broadcasting: 3
I0125 14:00:15.823306       8 log.go:172] (0xc00090ef20) Reply frame received for 3
I0125 14:00:15.823334       8 log.go:172] (0xc00090ef20) (0xc0025366e0) Create stream
I0125 14:00:15.823341       8 log.go:172] (0xc00090ef20) (0xc0025366e0) Stream added, broadcasting: 5
I0125 14:00:15.824323       8 log.go:172] (0xc00090ef20) Reply frame received for 5
I0125 14:00:15.926285       8 log.go:172] (0xc00090ef20) Data frame received for 3
I0125 14:00:15.926417       8 log.go:172] (0xc001134320) (3) Data frame handling
I0125 14:00:15.926446       8 log.go:172] (0xc001134320) (3) Data frame sent
I0125 14:00:16.052793       8 log.go:172] (0xc00090ef20) (0xc001134320) Stream removed, broadcasting: 3
I0125 14:00:16.053156       8 log.go:172] (0xc00090ef20) Data frame received for 1
I0125 14:00:16.053180       8 log.go:172] (0xc002536640) (1) Data frame handling
I0125 14:00:16.053262       8 log.go:172] (0xc002536640) (1) Data frame sent
I0125 14:00:16.053315       8 log.go:172] (0xc00090ef20) (0xc0025366e0) Stream removed, broadcasting: 5
I0125 14:00:16.053437       8 log.go:172] (0xc00090ef20) (0xc002536640) Stream removed, broadcasting: 1
I0125 14:00:16.053476       8 log.go:172] (0xc00090ef20) Go away received
I0125 14:00:16.054695       8 log.go:172] (0xc00090ef20) (0xc002536640) Stream removed, broadcasting: 1
I0125 14:00:16.054750       8 log.go:172] (0xc00090ef20) (0xc001134320) Stream removed, broadcasting: 3
I0125 14:00:16.054772       8 log.go:172] (0xc00090ef20) (0xc0025366e0) Stream removed, broadcasting: 5
Jan 25 14:00:16.054: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:00:16.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6588" for this suite.
Jan 25 14:00:40.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:00:40.237: INFO: namespace pod-network-test-6588 deletion completed in 24.169169673s

• [SLOW TEST:59.457 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
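The I0125 log.go lines above are the SPDY frames of an exec into the hostexec pod; streams 1, 3 and 5 carry the error, stdout and stderr channels of one connection. A generic client-go sketch of such an exec (this is not the framework's ExecWithOptions; StreamWithContext assumes a recent client-go release):

package main

import (
    "bytes"
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    // Build the pods/exec subresource request; pod, namespace and target IP
    // are copied from the log above.
    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").Namespace("pod-network-test-6588").
        Name("host-test-container-pod").SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "hostexec",
            Command:   []string{"/bin/sh", "-c", "curl -g -q -s --max-time 15 http://10.32.0.4:8080/hostName"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)
    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    var stdout, stderr bytes.Buffer
    if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
        Stdout: &stdout, Stderr: &stderr,
    }); err != nil {
        panic(err)
    }
    fmt.Println("hostName:", stdout.String())
}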
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:00:40.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 14:00:40.341: INFO: Waiting up to 5m0s for pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580" in namespace "emptydir-9085" to be "success or failure"
Jan 25 14:00:40.408: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Pending", Reason="", readiness=false. Elapsed: 67.098993ms
Jan 25 14:00:42.416: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075486307s
Jan 25 14:00:44.448: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107147724s
Jan 25 14:00:46.461: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120700999s
Jan 25 14:00:48.474: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Running", Reason="", readiness=true. Elapsed: 8.133319232s
Jan 25 14:00:50.492: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15124047s
STEP: Saw pod success
Jan 25 14:00:50.492: INFO: Pod "pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580" satisfied condition "success or failure"
Jan 25 14:00:50.504: INFO: Trying to get logs from node iruya-node pod pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580 container test-container: 
STEP: delete the pod
Jan 25 14:00:51.713: INFO: Waiting for pod pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580 to disappear
Jan 25 14:00:51.733: INFO: Pod pod-88bbbbd2-bc8e-4c8d-bb88-e57db639d580 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:00:51.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9085" for this suite.
Jan 25 14:00:57.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:00:58.004: INFO: namespace emptydir-9085 deletion completed in 6.259899646s

• [SLOW TEST:17.766 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
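A sketch of "emptydir 0644 on tmpfs": a memory-backed emptyDir mounted into a container running as a non-root UID. The suite uses its own test image for the permission check, so the busybox command below is only a stand-in:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1001) // any non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // umask 0022 yields 0644 files; the mount line shows tmpfs.
                Command: []string{"sh", "-c",
                    "umask 0022 && echo hi > /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the volume with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod.Spec)
}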
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:00:58.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:00:58.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d" in namespace "downward-api-5694" to be "success or failure"
Jan 25 14:00:58.181: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.222024ms
Jan 25 14:01:00.186: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014372367s
Jan 25 14:01:02.246: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074312413s
Jan 25 14:01:04.256: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084042146s
Jan 25 14:01:06.302: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130238175s
STEP: Saw pod success
Jan 25 14:01:06.302: INFO: Pod "downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d" satisfied condition "success or failure"
Jan 25 14:01:06.307: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d container client-container: 
STEP: delete the pod
Jan 25 14:01:06.354: INFO: Waiting for pod downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d to disappear
Jan 25 14:01:06.360: INFO: Pod downwardapi-volume-70362969-cb01-4432-b9d6-49039b1ba02d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:01:06.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5694" for this suite.
Jan 25 14:01:12.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:01:12.489: INFO: namespace downward-api-5694 deletion completed in 6.123677117s

• [SLOW TEST:14.485 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:01:12.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 25 14:01:12.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 25 14:01:14.698: INFO: stderr: ""
Jan 25 14:01:14.698: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:01:14.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4590" for this suite.
Jan 25 14:01:20.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:01:20.895: INFO: namespace kubectl-4590 deletion completed in 6.181832249s

• [SLOW TEST:8.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
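The same assertion can be made outside the framework by shelling out to kubectl; note the banner wording varies by version ("Kubernetes master" here, "Kubernetes control plane" in newer kubectl), so the sketch only checks the stable "is running at" phrase:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "cluster-info").CombinedOutput()
    if err != nil {
        panic(err)
    }
    if !strings.Contains(string(out), "is running at") {
        panic("cluster-info did not report a running master service")
    }
    fmt.Print(string(out)) // includes the ANSI color escapes seen in the log
}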
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:01:20.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2973.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2973.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:01:33.096: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.111: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.121: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.128: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.133: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.140: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.145: INFO: Unable to read jessie_udp@PodARecord from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.150: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57: the server could not find the requested resource (get pods dns-test-c8487d42-c799-4b28-a974-210cf548aa57)
Jan 25 14:01:33.150: INFO: Lookups using dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 25 14:01:38.260: INFO: DNS probes using dns-2973/dns-test-c8487d42-c799-4b28-a974-210cf548aa57 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:01:38.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2973" for this suite.
Jan 25 14:01:44.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:01:44.520: INFO: namespace dns-2973 deletion completed in 6.19229701s

• [SLOW TEST:23.621 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
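The early "Unable to read ..." lines are the prober fetching the /results files from the DNS test pod before the dig loops have written them; on the next poll the probes succeed. The property each loop asserts, as a Go sketch run from inside any pod on the cluster:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Cluster DNS must resolve the API server's service name.
    ips, err := net.LookupIP("kubernetes.default.svc.cluster.local")
    if err != nil {
        panic(err)
    }
    fmt.Println("resolved:", ips)
}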
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:01:44.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-31a3a8c4-47d8-4412-a6af-5f1a58cf24b4
STEP: Creating a pod to test consume configMaps
Jan 25 14:01:44.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e" in namespace "projected-1885" to be "success or failure"
Jan 25 14:01:44.660: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.968426ms
Jan 25 14:01:46.688: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06026894s
Jan 25 14:01:48.699: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072073292s
Jan 25 14:01:50.706: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078848463s
Jan 25 14:01:52.744: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117138775s
STEP: Saw pod success
Jan 25 14:01:52.745: INFO: Pod "pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e" satisfied condition "success or failure"
Jan 25 14:01:52.749: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 14:01:52.917: INFO: Waiting for pod pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e to disappear
Jan 25 14:01:52.924: INFO: Pod pod-projected-configmaps-0a40e812-fde8-43c7-a7cd-338dc5734f0e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:01:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1885" for this suite.
Jan 25 14:01:58.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:01:59.087: INFO: namespace projected-1885 deletion completed in 6.156531665s

• [SLOW TEST:14.567 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:01:59.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-wd8l
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 14:01:59.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wd8l" in namespace "subpath-7618" to be "success or failure"
Jan 25 14:01:59.300: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Pending", Reason="", readiness=false. Elapsed: 19.840661ms
Jan 25 14:02:01.368: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087354521s
Jan 25 14:02:03.379: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098230489s
Jan 25 14:02:05.430: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149655557s
Jan 25 14:02:07.444: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164015278s
Jan 25 14:02:09.463: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 10.182333802s
Jan 25 14:02:11.494: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 12.214034514s
Jan 25 14:02:13.508: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 14.227285829s
Jan 25 14:02:15.549: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 16.268737406s
Jan 25 14:02:17.584: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 18.303255946s
Jan 25 14:02:19.600: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 20.319345268s
Jan 25 14:02:21.618: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 22.337405431s
Jan 25 14:02:23.637: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 24.356714226s
Jan 25 14:02:25.646: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 26.365673509s
Jan 25 14:02:27.674: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Running", Reason="", readiness=true. Elapsed: 28.394049845s
Jan 25 14:02:29.692: INFO: Pod "pod-subpath-test-configmap-wd8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.412203526s
STEP: Saw pod success
Jan 25 14:02:29.693: INFO: Pod "pod-subpath-test-configmap-wd8l" satisfied condition "success or failure"
Jan 25 14:02:29.710: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-wd8l container test-container-subpath-configmap-wd8l: 
STEP: delete the pod
Jan 25 14:02:29.976: INFO: Waiting for pod pod-subpath-test-configmap-wd8l to disappear
Jan 25 14:02:29.988: INFO: Pod pod-subpath-test-configmap-wd8l no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wd8l
Jan 25 14:02:29.988: INFO: Deleting pod "pod-subpath-test-configmap-wd8l" in namespace "subpath-7618"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:02:29.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7618" for this suite.
Jan 25 14:02:36.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:02:36.149: INFO: namespace subpath-7618 deletion completed in 6.151758438s

• [SLOW TEST:37.062 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
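A sketch of the subPath mount the atomic-writer test exercises: the same configMap volume mounted whole at one path and a single key mounted via subPath (volume, key and configMap names are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mounts := []corev1.VolumeMount{
        {Name: "configmap-volume", MountPath: "/whole-volume"},
        // SubPath mounts a single file out of the atomically-updated volume.
        {Name: "configmap-volume", MountPath: "/only-one-key", SubPath: "configmap-key"},
    }
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
            },
        },
    }
    fmt.Printf("%+v\n%+v\n", mounts, vol)
}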
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:02:36.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 14:02:44.464: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:02:44.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4836" for this suite.
Jan 25 14:02:52.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:02:52.802: INFO: namespace container-runtime-4836 deletion completed in 8.246631433s

• [SLOW TEST:16.652 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
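A sketch of the container shape that produces the "&{DONE}" match above: the container logs and exits non-zero without writing a termination message file, and FallbackToLogsOnError makes the kubelet copy the log tail into the termination message:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "termination-message-container",
        Image:   "busybox",
        // Non-zero exit with nothing written to the termination message path.
        Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
    fmt.Printf("%+v\n", c)
}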
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:02:52.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ce754591-385a-416d-8ab6-a1fd77410d0d
STEP: Creating a pod to test consume configMaps
Jan 25 14:02:52.973: INFO: Waiting up to 5m0s for pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d" in namespace "configmap-1278" to be "success or failure"
Jan 25 14:02:52.990: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.354741ms
Jan 25 14:02:54.998: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02510158s
Jan 25 14:02:57.029: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05585155s
Jan 25 14:02:59.036: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062588411s
Jan 25 14:03:01.576: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602626428s
Jan 25 14:03:03.585: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611885171s
STEP: Saw pod success
Jan 25 14:03:03.585: INFO: Pod "pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d" satisfied condition "success or failure"
Jan 25 14:03:03.592: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d container configmap-volume-test: 
STEP: delete the pod
Jan 25 14:03:03.935: INFO: Waiting for pod pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d to disappear
Jan 25 14:03:03.948: INFO: Pod pod-configmaps-e12eb7f1-ce25-4e24-94b6-f219e32b537d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:03:03.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1278" for this suite.
Jan 25 14:03:10.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:03:10.390: INFO: namespace configmap-1278 deletion completed in 6.435652833s

• [SLOW TEST:17.588 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:03:10.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-48231871-5e5d-411f-bac9-7b83537d98cc
STEP: Creating a pod to test consume secrets
Jan 25 14:03:10.531: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d" in namespace "projected-856" to be "success or failure"
Jan 25 14:03:10.545: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.055898ms
Jan 25 14:03:12.564: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032519152s
Jan 25 14:03:14.576: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044427113s
Jan 25 14:03:16.599: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067614576s
Jan 25 14:03:18.608: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076383039s
Jan 25 14:03:20.625: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093629327s
STEP: Saw pod success
Jan 25 14:03:20.625: INFO: Pod "pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d" satisfied condition "success or failure"
Jan 25 14:03:20.631: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 14:03:20.737: INFO: Waiting for pod pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d to disappear
Jan 25 14:03:20.742: INFO: Pod pod-projected-secrets-22f77c60-8352-4079-bc64-960d1e2d405d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:03:20.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-856" for this suite.
Jan 25 14:03:26.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:03:26.974: INFO: namespace projected-856 deletion completed in 6.156297795s

• [SLOW TEST:16.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:03:26.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:03:37.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-648" for this suite.
Jan 25 14:04:29.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:04:29.299: INFO: namespace kubelet-test-648 deletion completed in 52.145870021s

• [SLOW TEST:62.325 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
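[Editor's note] A minimal sketch of the container shape behind this Kubelet test, assuming the same corev1/metav1 imports as the sketch after the projected-secret test above; the write target and image are illustrative:

// readOnlyRootPod: a busybox container whose root filesystem is read-only,
// so a write anywhere under / must fail while the pod itself keeps running.
func readOnlyRootPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox", // assumed image
				// The write is expected to fail with "Read-only file system";
				// writes to mounted volumes (emptyDir etc.) would still work.
				Command: []string{"sh", "-c", "echo test > /file; sleep 3600"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}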
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:04:29.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 14:04:29.410: INFO: Waiting up to 5m0s for pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df" in namespace "emptydir-4473" to be "success or failure"
Jan 25 14:04:29.428: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Pending", Reason="", readiness=false. Elapsed: 17.936111ms
Jan 25 14:04:31.439: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029295392s
Jan 25 14:04:33.449: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039521677s
Jan 25 14:04:35.458: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048763337s
Jan 25 14:04:37.468: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058513328s
Jan 25 14:04:39.475: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065247378s
STEP: Saw pod success
Jan 25 14:04:39.475: INFO: Pod "pod-4b86580f-4aa3-4afa-8e96-78b3302702df" satisfied condition "success or failure"
Jan 25 14:04:39.478: INFO: Trying to get logs from node iruya-node pod pod-4b86580f-4aa3-4afa-8e96-78b3302702df container test-container: 
STEP: delete the pod
Jan 25 14:04:39.523: INFO: Waiting for pod pod-4b86580f-4aa3-4afa-8e96-78b3302702df to disappear
Jan 25 14:04:39.542: INFO: Pod pod-4b86580f-4aa3-4afa-8e96-78b3302702df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:04:39.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4473" for this suite.
Jan 25 14:04:45.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:04:45.873: INFO: namespace emptydir-4473 deletion completed in 6.325211725s

• [SLOW TEST:16.573 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
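[Editor's note] A sketch of the "(non-root,0666,tmpfs)" pod, again assuming the imports and pointer helpers from the first sketch above; the shell command is an illustrative stand-in for what the suite's mount-test image checks (file mode 0666, tmpfs backing, non-root user):

// emptyDirTmpfsPod: a memory-backed emptyDir volume, a non-root user, and a
// command that creates a file with mode 0666 and reads its permissions back.
func emptyDirTmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"}, // illustrative
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}, // non-root (assumed UID)
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed image
				Command: []string{"sh", "-c",
					"touch /test/f && chmod 0666 /test/f && stat -c '%a' /test/f && mount | grep ' /test '"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}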
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:04:45.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:04:46.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396" in namespace "downward-api-9435" to be "success or failure"
Jan 25 14:04:46.107: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Pending", Reason="", readiness=false. Elapsed: 30.292428ms
Jan 25 14:04:48.112: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03603035s
Jan 25 14:04:50.125: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048467268s
Jan 25 14:04:52.138: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061356052s
Jan 25 14:04:54.153: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076579924s
Jan 25 14:04:56.164: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087056564s
STEP: Saw pod success
Jan 25 14:04:56.164: INFO: Pod "downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396" satisfied condition "success or failure"
Jan 25 14:04:56.168: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396 container client-container: 
STEP: delete the pod
Jan 25 14:04:56.399: INFO: Waiting for pod downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396 to disappear
Jan 25 14:04:56.477: INFO: Pod downwardapi-volume-6375ad74-5597-4c33-af3e-3c79e9636396 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:04:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9435" for this suite.
Jan 25 14:05:02.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:05:02.672: INFO: namespace downward-api-9435 deletion completed in 6.18254409s

• [SLOW TEST:16.799 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
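[Editor's note] A sketch of the downward API volume this test mounts, assuming the earlier imports plus "k8s.io/apimachinery/pkg/api/resource" for the quantity; the 32Mi request is an assumed value. The container prints the file so the framework can assert the memory request was projected into it:

// downwardAPIMemoryPod: a downwardAPI volume exposing the container's own
// memory request as a file via resourceFieldRef.
func downwardAPIMemoryPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"), // assumed request
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}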
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:05:02.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 25 14:05:22.918: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:22.918: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:23.048002       8 log.go:172] (0xc001e94dc0) (0xc000ea74a0) Create stream
I0125 14:05:23.048274       8 log.go:172] (0xc001e94dc0) (0xc000ea74a0) Stream added, broadcasting: 1
I0125 14:05:23.082745       8 log.go:172] (0xc001e94dc0) Reply frame received for 1
I0125 14:05:23.082982       8 log.go:172] (0xc001e94dc0) (0xc002536be0) Create stream
I0125 14:05:23.083122       8 log.go:172] (0xc001e94dc0) (0xc002536be0) Stream added, broadcasting: 3
I0125 14:05:23.095671       8 log.go:172] (0xc001e94dc0) Reply frame received for 3
I0125 14:05:23.095845       8 log.go:172] (0xc001e94dc0) (0xc000ea75e0) Create stream
I0125 14:05:23.095879       8 log.go:172] (0xc001e94dc0) (0xc000ea75e0) Stream added, broadcasting: 5
I0125 14:05:23.102247       8 log.go:172] (0xc001e94dc0) Reply frame received for 5
I0125 14:05:23.231818       8 log.go:172] (0xc001e94dc0) Data frame received for 3
I0125 14:05:23.231912       8 log.go:172] (0xc002536be0) (3) Data frame handling
I0125 14:05:23.231941       8 log.go:172] (0xc002536be0) (3) Data frame sent
I0125 14:05:23.401358       8 log.go:172] (0xc001e94dc0) Data frame received for 1
I0125 14:05:23.401486       8 log.go:172] (0xc001e94dc0) (0xc002536be0) Stream removed, broadcasting: 3
I0125 14:05:23.401548       8 log.go:172] (0xc000ea74a0) (1) Data frame handling
I0125 14:05:23.401581       8 log.go:172] (0xc000ea74a0) (1) Data frame sent
I0125 14:05:23.401595       8 log.go:172] (0xc001e94dc0) (0xc000ea75e0) Stream removed, broadcasting: 5
I0125 14:05:23.401646       8 log.go:172] (0xc001e94dc0) (0xc000ea74a0) Stream removed, broadcasting: 1
I0125 14:05:23.401670       8 log.go:172] (0xc001e94dc0) Go away received
I0125 14:05:23.402084       8 log.go:172] (0xc001e94dc0) (0xc000ea74a0) Stream removed, broadcasting: 1
I0125 14:05:23.402111       8 log.go:172] (0xc001e94dc0) (0xc002536be0) Stream removed, broadcasting: 3
I0125 14:05:23.402135       8 log.go:172] (0xc001e94dc0) (0xc000ea75e0) Stream removed, broadcasting: 5
Jan 25 14:05:23.402: INFO: Exec stderr: ""
Jan 25 14:05:23.402: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:23.402: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:23.472021       8 log.go:172] (0xc001877130) (0xc001379f40) Create stream
I0125 14:05:23.472074       8 log.go:172] (0xc001877130) (0xc001379f40) Stream added, broadcasting: 1
I0125 14:05:23.479658       8 log.go:172] (0xc001877130) Reply frame received for 1
I0125 14:05:23.479708       8 log.go:172] (0xc001877130) (0xc00279df40) Create stream
I0125 14:05:23.479720       8 log.go:172] (0xc001877130) (0xc00279df40) Stream added, broadcasting: 3
I0125 14:05:23.481495       8 log.go:172] (0xc001877130) Reply frame received for 3
I0125 14:05:23.481530       8 log.go:172] (0xc001877130) (0xc002a3c000) Create stream
I0125 14:05:23.481545       8 log.go:172] (0xc001877130) (0xc002a3c000) Stream added, broadcasting: 5
I0125 14:05:23.484492       8 log.go:172] (0xc001877130) Reply frame received for 5
I0125 14:05:23.600579       8 log.go:172] (0xc001877130) Data frame received for 3
I0125 14:05:23.600707       8 log.go:172] (0xc00279df40) (3) Data frame handling
I0125 14:05:23.600748       8 log.go:172] (0xc00279df40) (3) Data frame sent
I0125 14:05:23.803360       8 log.go:172] (0xc001877130) Data frame received for 1
I0125 14:05:23.803696       8 log.go:172] (0xc001877130) (0xc002a3c000) Stream removed, broadcasting: 5
I0125 14:05:23.803795       8 log.go:172] (0xc001379f40) (1) Data frame handling
I0125 14:05:23.803842       8 log.go:172] (0xc001379f40) (1) Data frame sent
I0125 14:05:23.803905       8 log.go:172] (0xc001877130) (0xc00279df40) Stream removed, broadcasting: 3
I0125 14:05:23.803977       8 log.go:172] (0xc001877130) (0xc001379f40) Stream removed, broadcasting: 1
I0125 14:05:23.804010       8 log.go:172] (0xc001877130) Go away received
I0125 14:05:23.805389       8 log.go:172] (0xc001877130) (0xc001379f40) Stream removed, broadcasting: 1
I0125 14:05:23.805442       8 log.go:172] (0xc001877130) (0xc00279df40) Stream removed, broadcasting: 3
I0125 14:05:23.805457       8 log.go:172] (0xc001877130) (0xc002a3c000) Stream removed, broadcasting: 5
Jan 25 14:05:23.805: INFO: Exec stderr: ""
Jan 25 14:05:23.805: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:23.805: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:23.911619       8 log.go:172] (0xc001e95ce0) (0xc000ea7a40) Create stream
I0125 14:05:23.911887       8 log.go:172] (0xc001e95ce0) (0xc000ea7a40) Stream added, broadcasting: 1
I0125 14:05:23.940945       8 log.go:172] (0xc001e95ce0) Reply frame received for 1
I0125 14:05:23.941218       8 log.go:172] (0xc001e95ce0) (0xc001d66000) Create stream
I0125 14:05:23.941233       8 log.go:172] (0xc001e95ce0) (0xc001d66000) Stream added, broadcasting: 3
I0125 14:05:23.947476       8 log.go:172] (0xc001e95ce0) Reply frame received for 3
I0125 14:05:23.947724       8 log.go:172] (0xc001e95ce0) (0xc002a3c1e0) Create stream
I0125 14:05:23.947763       8 log.go:172] (0xc001e95ce0) (0xc002a3c1e0) Stream added, broadcasting: 5
I0125 14:05:23.950796       8 log.go:172] (0xc001e95ce0) Reply frame received for 5
I0125 14:05:24.215941       8 log.go:172] (0xc001e95ce0) Data frame received for 3
I0125 14:05:24.216069       8 log.go:172] (0xc001d66000) (3) Data frame handling
I0125 14:05:24.216096       8 log.go:172] (0xc001d66000) (3) Data frame sent
I0125 14:05:24.345035       8 log.go:172] (0xc001e95ce0) Data frame received for 1
I0125 14:05:24.345272       8 log.go:172] (0xc001e95ce0) (0xc002a3c1e0) Stream removed, broadcasting: 5
I0125 14:05:24.345339       8 log.go:172] (0xc000ea7a40) (1) Data frame handling
I0125 14:05:24.345383       8 log.go:172] (0xc000ea7a40) (1) Data frame sent
I0125 14:05:24.345428       8 log.go:172] (0xc001e95ce0) (0xc001d66000) Stream removed, broadcasting: 3
I0125 14:05:24.345484       8 log.go:172] (0xc001e95ce0) (0xc000ea7a40) Stream removed, broadcasting: 1
I0125 14:05:24.345496       8 log.go:172] (0xc001e95ce0) Go away received
I0125 14:05:24.346098       8 log.go:172] (0xc001e95ce0) (0xc000ea7a40) Stream removed, broadcasting: 1
I0125 14:05:24.346112       8 log.go:172] (0xc001e95ce0) (0xc001d66000) Stream removed, broadcasting: 3
I0125 14:05:24.346120       8 log.go:172] (0xc001e95ce0) (0xc002a3c1e0) Stream removed, broadcasting: 5
Jan 25 14:05:24.346: INFO: Exec stderr: ""
Jan 25 14:05:24.346: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:24.346: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:24.389352       8 log.go:172] (0xc001807ce0) (0xc001d66460) Create stream
I0125 14:05:24.389392       8 log.go:172] (0xc001807ce0) (0xc001d66460) Stream added, broadcasting: 1
I0125 14:05:24.394649       8 log.go:172] (0xc001807ce0) Reply frame received for 1
I0125 14:05:24.394683       8 log.go:172] (0xc001807ce0) (0xc002536c80) Create stream
I0125 14:05:24.394689       8 log.go:172] (0xc001807ce0) (0xc002536c80) Stream added, broadcasting: 3
I0125 14:05:24.396147       8 log.go:172] (0xc001807ce0) Reply frame received for 3
I0125 14:05:24.396173       8 log.go:172] (0xc001807ce0) (0xc002c6c460) Create stream
I0125 14:05:24.396180       8 log.go:172] (0xc001807ce0) (0xc002c6c460) Stream added, broadcasting: 5
I0125 14:05:24.397556       8 log.go:172] (0xc001807ce0) Reply frame received for 5
I0125 14:05:24.494023       8 log.go:172] (0xc001807ce0) Data frame received for 3
I0125 14:05:24.494104       8 log.go:172] (0xc002536c80) (3) Data frame handling
I0125 14:05:24.494122       8 log.go:172] (0xc002536c80) (3) Data frame sent
I0125 14:05:24.742748       8 log.go:172] (0xc001807ce0) Data frame received for 1
I0125 14:05:24.742931       8 log.go:172] (0xc001807ce0) (0xc002c6c460) Stream removed, broadcasting: 5
I0125 14:05:24.743025       8 log.go:172] (0xc001d66460) (1) Data frame handling
I0125 14:05:24.743069       8 log.go:172] (0xc001d66460) (1) Data frame sent
I0125 14:05:24.743152       8 log.go:172] (0xc001807ce0) (0xc002536c80) Stream removed, broadcasting: 3
I0125 14:05:24.743200       8 log.go:172] (0xc001807ce0) (0xc001d66460) Stream removed, broadcasting: 1
I0125 14:05:24.743249       8 log.go:172] (0xc001807ce0) Go away received
I0125 14:05:24.743557       8 log.go:172] (0xc001807ce0) (0xc001d66460) Stream removed, broadcasting: 1
I0125 14:05:24.743582       8 log.go:172] (0xc001807ce0) (0xc002536c80) Stream removed, broadcasting: 3
I0125 14:05:24.743597       8 log.go:172] (0xc001807ce0) (0xc002c6c460) Stream removed, broadcasting: 5
Jan 25 14:05:24.743: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 25 14:05:24.743: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:24.743: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:24.791048       8 log.go:172] (0xc0019438c0) (0xc002536fa0) Create stream
I0125 14:05:24.791127       8 log.go:172] (0xc0019438c0) (0xc002536fa0) Stream added, broadcasting: 1
I0125 14:05:24.796195       8 log.go:172] (0xc0019438c0) Reply frame received for 1
I0125 14:05:24.796267       8 log.go:172] (0xc0019438c0) (0xc001d66500) Create stream
I0125 14:05:24.796289       8 log.go:172] (0xc0019438c0) (0xc001d66500) Stream added, broadcasting: 3
I0125 14:05:24.797656       8 log.go:172] (0xc0019438c0) Reply frame received for 3
I0125 14:05:24.797678       8 log.go:172] (0xc0019438c0) (0xc002537040) Create stream
I0125 14:05:24.797685       8 log.go:172] (0xc0019438c0) (0xc002537040) Stream added, broadcasting: 5
I0125 14:05:24.800403       8 log.go:172] (0xc0019438c0) Reply frame received for 5
I0125 14:05:24.873969       8 log.go:172] (0xc0019438c0) Data frame received for 3
I0125 14:05:24.874024       8 log.go:172] (0xc001d66500) (3) Data frame handling
I0125 14:05:24.874044       8 log.go:172] (0xc001d66500) (3) Data frame sent
I0125 14:05:24.974415       8 log.go:172] (0xc0019438c0) (0xc001d66500) Stream removed, broadcasting: 3
I0125 14:05:24.974585       8 log.go:172] (0xc0019438c0) Data frame received for 1
I0125 14:05:24.974636       8 log.go:172] (0xc002536fa0) (1) Data frame handling
I0125 14:05:24.974667       8 log.go:172] (0xc002536fa0) (1) Data frame sent
I0125 14:05:24.974683       8 log.go:172] (0xc0019438c0) (0xc002536fa0) Stream removed, broadcasting: 1
I0125 14:05:24.974777       8 log.go:172] (0xc0019438c0) (0xc002537040) Stream removed, broadcasting: 5
I0125 14:05:24.974807       8 log.go:172] (0xc0019438c0) Go away received
I0125 14:05:24.974987       8 log.go:172] (0xc0019438c0) (0xc002536fa0) Stream removed, broadcasting: 1
I0125 14:05:24.975003       8 log.go:172] (0xc0019438c0) (0xc001d66500) Stream removed, broadcasting: 3
I0125 14:05:24.975014       8 log.go:172] (0xc0019438c0) (0xc002537040) Stream removed, broadcasting: 5
Jan 25 14:05:24.975: INFO: Exec stderr: ""
Jan 25 14:05:24.975: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:24.975: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:25.028935       8 log.go:172] (0xc001343a20) (0xc002c6c780) Create stream
I0125 14:05:25.028973       8 log.go:172] (0xc001343a20) (0xc002c6c780) Stream added, broadcasting: 1
I0125 14:05:25.033165       8 log.go:172] (0xc001343a20) Reply frame received for 1
I0125 14:05:25.033211       8 log.go:172] (0xc001343a20) (0xc002a3c320) Create stream
I0125 14:05:25.033220       8 log.go:172] (0xc001343a20) (0xc002a3c320) Stream added, broadcasting: 3
I0125 14:05:25.034494       8 log.go:172] (0xc001343a20) Reply frame received for 3
I0125 14:05:25.034532       8 log.go:172] (0xc001343a20) (0xc0025370e0) Create stream
I0125 14:05:25.034543       8 log.go:172] (0xc001343a20) (0xc0025370e0) Stream added, broadcasting: 5
I0125 14:05:25.035871       8 log.go:172] (0xc001343a20) Reply frame received for 5
I0125 14:05:25.112722       8 log.go:172] (0xc001343a20) Data frame received for 3
I0125 14:05:25.112775       8 log.go:172] (0xc002a3c320) (3) Data frame handling
I0125 14:05:25.112799       8 log.go:172] (0xc002a3c320) (3) Data frame sent
I0125 14:05:25.214457       8 log.go:172] (0xc001343a20) (0xc002a3c320) Stream removed, broadcasting: 3
I0125 14:05:25.214638       8 log.go:172] (0xc001343a20) Data frame received for 1
I0125 14:05:25.214669       8 log.go:172] (0xc002c6c780) (1) Data frame handling
I0125 14:05:25.214698       8 log.go:172] (0xc002c6c780) (1) Data frame sent
I0125 14:05:25.214794       8 log.go:172] (0xc001343a20) (0xc002c6c780) Stream removed, broadcasting: 1
I0125 14:05:25.214948       8 log.go:172] (0xc001343a20) (0xc0025370e0) Stream removed, broadcasting: 5
I0125 14:05:25.215056       8 log.go:172] (0xc001343a20) (0xc002c6c780) Stream removed, broadcasting: 1
I0125 14:05:25.215088       8 log.go:172] (0xc001343a20) (0xc002a3c320) Stream removed, broadcasting: 3
I0125 14:05:25.215100       8 log.go:172] (0xc001343a20) (0xc0025370e0) Stream removed, broadcasting: 5
Jan 25 14:05:25.215: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
I0125 14:05:25.215234       8 log.go:172] (0xc001343a20) Go away received
Jan 25 14:05:25.215: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:25.215: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:25.268796       8 log.go:172] (0xc00191f3f0) (0xc001d66960) Create stream
I0125 14:05:25.268849       8 log.go:172] (0xc00191f3f0) (0xc001d66960) Stream added, broadcasting: 1
I0125 14:05:25.275206       8 log.go:172] (0xc00191f3f0) Reply frame received for 1
I0125 14:05:25.275240       8 log.go:172] (0xc00191f3f0) (0xc001d66b40) Create stream
I0125 14:05:25.275250       8 log.go:172] (0xc00191f3f0) (0xc001d66b40) Stream added, broadcasting: 3
I0125 14:05:25.276337       8 log.go:172] (0xc00191f3f0) Reply frame received for 3
I0125 14:05:25.276355       8 log.go:172] (0xc00191f3f0) (0xc002a3c3c0) Create stream
I0125 14:05:25.276361       8 log.go:172] (0xc00191f3f0) (0xc002a3c3c0) Stream added, broadcasting: 5
I0125 14:05:25.277415       8 log.go:172] (0xc00191f3f0) Reply frame received for 5
I0125 14:05:25.348390       8 log.go:172] (0xc00191f3f0) Data frame received for 3
I0125 14:05:25.348499       8 log.go:172] (0xc001d66b40) (3) Data frame handling
I0125 14:05:25.348545       8 log.go:172] (0xc001d66b40) (3) Data frame sent
I0125 14:05:25.448758       8 log.go:172] (0xc00191f3f0) (0xc001d66b40) Stream removed, broadcasting: 3
I0125 14:05:25.448960       8 log.go:172] (0xc00191f3f0) Data frame received for 1
I0125 14:05:25.448993       8 log.go:172] (0xc001d66960) (1) Data frame handling
I0125 14:05:25.449017       8 log.go:172] (0xc001d66960) (1) Data frame sent
I0125 14:05:25.449028       8 log.go:172] (0xc00191f3f0) (0xc002a3c3c0) Stream removed, broadcasting: 5
I0125 14:05:25.449079       8 log.go:172] (0xc00191f3f0) (0xc001d66960) Stream removed, broadcasting: 1
I0125 14:05:25.449100       8 log.go:172] (0xc00191f3f0) Go away received
I0125 14:05:25.449390       8 log.go:172] (0xc00191f3f0) (0xc001d66960) Stream removed, broadcasting: 1
I0125 14:05:25.449419       8 log.go:172] (0xc00191f3f0) (0xc001d66b40) Stream removed, broadcasting: 3
I0125 14:05:25.449469       8 log.go:172] (0xc00191f3f0) (0xc002a3c3c0) Stream removed, broadcasting: 5
Jan 25 14:05:25.449: INFO: Exec stderr: ""
Jan 25 14:05:25.449: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:25.449: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:25.499004       8 log.go:172] (0xc00276c4d0) (0xc002c6caa0) Create stream
I0125 14:05:25.499052       8 log.go:172] (0xc00276c4d0) (0xc002c6caa0) Stream added, broadcasting: 1
I0125 14:05:25.504542       8 log.go:172] (0xc00276c4d0) Reply frame received for 1
I0125 14:05:25.504579       8 log.go:172] (0xc00276c4d0) (0xc002a3c460) Create stream
I0125 14:05:25.504591       8 log.go:172] (0xc00276c4d0) (0xc002a3c460) Stream added, broadcasting: 3
I0125 14:05:25.505623       8 log.go:172] (0xc00276c4d0) Reply frame received for 3
I0125 14:05:25.505658       8 log.go:172] (0xc00276c4d0) (0xc002a3c500) Create stream
I0125 14:05:25.505670       8 log.go:172] (0xc00276c4d0) (0xc002a3c500) Stream added, broadcasting: 5
I0125 14:05:25.506785       8 log.go:172] (0xc00276c4d0) Reply frame received for 5
I0125 14:05:25.584397       8 log.go:172] (0xc00276c4d0) Data frame received for 3
I0125 14:05:25.584513       8 log.go:172] (0xc002a3c460) (3) Data frame handling
I0125 14:05:25.584539       8 log.go:172] (0xc002a3c460) (3) Data frame sent
I0125 14:05:25.764000       8 log.go:172] (0xc00276c4d0) Data frame received for 1
I0125 14:05:25.764119       8 log.go:172] (0xc00276c4d0) (0xc002a3c460) Stream removed, broadcasting: 3
I0125 14:05:25.764195       8 log.go:172] (0xc002c6caa0) (1) Data frame handling
I0125 14:05:25.764223       8 log.go:172] (0xc002c6caa0) (1) Data frame sent
I0125 14:05:25.764554       8 log.go:172] (0xc00276c4d0) (0xc002a3c500) Stream removed, broadcasting: 5
I0125 14:05:25.764593       8 log.go:172] (0xc00276c4d0) (0xc002c6caa0) Stream removed, broadcasting: 1
I0125 14:05:25.764617       8 log.go:172] (0xc00276c4d0) Go away received
I0125 14:05:25.764990       8 log.go:172] (0xc00276c4d0) (0xc002c6caa0) Stream removed, broadcasting: 1
I0125 14:05:25.765021       8 log.go:172] (0xc00276c4d0) (0xc002a3c460) Stream removed, broadcasting: 3
I0125 14:05:25.765040       8 log.go:172] (0xc00276c4d0) (0xc002a3c500) Stream removed, broadcasting: 5
Jan 25 14:05:25.765: INFO: Exec stderr: ""
Jan 25 14:05:25.765: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:25.765: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:25.842724       8 log.go:172] (0xc00276d290) (0xc002c6cdc0) Create stream
I0125 14:05:25.842778       8 log.go:172] (0xc00276d290) (0xc002c6cdc0) Stream added, broadcasting: 1
I0125 14:05:25.850916       8 log.go:172] (0xc00276d290) Reply frame received for 1
I0125 14:05:25.850991       8 log.go:172] (0xc00276d290) (0xc002c6ce60) Create stream
I0125 14:05:25.851001       8 log.go:172] (0xc00276d290) (0xc002c6ce60) Stream added, broadcasting: 3
I0125 14:05:25.852803       8 log.go:172] (0xc00276d290) Reply frame received for 3
I0125 14:05:25.852889       8 log.go:172] (0xc00276d290) (0xc002a3c5a0) Create stream
I0125 14:05:25.852896       8 log.go:172] (0xc00276d290) (0xc002a3c5a0) Stream added, broadcasting: 5
I0125 14:05:25.854390       8 log.go:172] (0xc00276d290) Reply frame received for 5
I0125 14:05:25.974714       8 log.go:172] (0xc00276d290) Data frame received for 3
I0125 14:05:25.974767       8 log.go:172] (0xc002c6ce60) (3) Data frame handling
I0125 14:05:25.974792       8 log.go:172] (0xc002c6ce60) (3) Data frame sent
I0125 14:05:26.119887       8 log.go:172] (0xc00276d290) Data frame received for 1
I0125 14:05:26.119989       8 log.go:172] (0xc00276d290) (0xc002a3c5a0) Stream removed, broadcasting: 5
I0125 14:05:26.120056       8 log.go:172] (0xc002c6cdc0) (1) Data frame handling
I0125 14:05:26.120084       8 log.go:172] (0xc002c6cdc0) (1) Data frame sent
I0125 14:05:26.120203       8 log.go:172] (0xc00276d290) (0xc002c6ce60) Stream removed, broadcasting: 3
I0125 14:05:26.120287       8 log.go:172] (0xc00276d290) (0xc002c6cdc0) Stream removed, broadcasting: 1
I0125 14:05:26.120632       8 log.go:172] (0xc00276d290) (0xc002c6cdc0) Stream removed, broadcasting: 1
I0125 14:05:26.120645       8 log.go:172] (0xc00276d290) (0xc002c6ce60) Stream removed, broadcasting: 3
I0125 14:05:26.120650       8 log.go:172] (0xc00276d290) (0xc002a3c5a0) Stream removed, broadcasting: 5
I0125 14:05:26.121650       8 log.go:172] (0xc00276d290) Go away received
Jan 25 14:05:26.121: INFO: Exec stderr: ""
Jan 25 14:05:26.122: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4882 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 14:05:26.122: INFO: >>> kubeConfig: /root/.kube/config
I0125 14:05:26.191401       8 log.go:172] (0xc0025649a0) (0xc002537400) Create stream
I0125 14:05:26.191473       8 log.go:172] (0xc0025649a0) (0xc002537400) Stream added, broadcasting: 1
I0125 14:05:26.200110       8 log.go:172] (0xc0025649a0) Reply frame received for 1
I0125 14:05:26.200135       8 log.go:172] (0xc0025649a0) (0xc002c6cfa0) Create stream
I0125 14:05:26.200142       8 log.go:172] (0xc0025649a0) (0xc002c6cfa0) Stream added, broadcasting: 3
I0125 14:05:26.202156       8 log.go:172] (0xc0025649a0) Reply frame received for 3
I0125 14:05:26.202221       8 log.go:172] (0xc0025649a0) (0xc001d66be0) Create stream
I0125 14:05:26.202236       8 log.go:172] (0xc0025649a0) (0xc001d66be0) Stream added, broadcasting: 5
I0125 14:05:26.203900       8 log.go:172] (0xc0025649a0) Reply frame received for 5
I0125 14:05:26.292837       8 log.go:172] (0xc0025649a0) Data frame received for 3
I0125 14:05:26.292920       8 log.go:172] (0xc002c6cfa0) (3) Data frame handling
I0125 14:05:26.292943       8 log.go:172] (0xc002c6cfa0) (3) Data frame sent
I0125 14:05:26.435425       8 log.go:172] (0xc0025649a0) Data frame received for 1
I0125 14:05:26.435530       8 log.go:172] (0xc002537400) (1) Data frame handling
I0125 14:05:26.435552       8 log.go:172] (0xc002537400) (1) Data frame sent
I0125 14:05:26.435855       8 log.go:172] (0xc0025649a0) (0xc001d66be0) Stream removed, broadcasting: 5
I0125 14:05:26.435935       8 log.go:172] (0xc0025649a0) (0xc002537400) Stream removed, broadcasting: 1
I0125 14:05:26.436223       8 log.go:172] (0xc0025649a0) (0xc002c6cfa0) Stream removed, broadcasting: 3
I0125 14:05:26.436387       8 log.go:172] (0xc0025649a0) Go away received
I0125 14:05:26.436468       8 log.go:172] (0xc0025649a0) (0xc002537400) Stream removed, broadcasting: 1
I0125 14:05:26.436524       8 log.go:172] (0xc0025649a0) (0xc002c6cfa0) Stream removed, broadcasting: 3
I0125 14:05:26.436547       8 log.go:172] (0xc0025649a0) (0xc001d66be0) Stream removed, broadcasting: 5
Jan 25 14:05:26.436: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:05:26.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4882" for this suite.
Jan 25 14:06:18.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:06:18.638: INFO: namespace e2e-kubelet-etc-hosts-4882 deletion completed in 52.189972822s

• [SLOW TEST:75.966 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
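[Editor's note] The "Create stream / Data frame" lines above are the SPDY exec channel the framework opens for each ExecWithOptions call. A minimal self-contained sketch of the same operation with client-go; Stream matches the v1.15-era API (newer client-go releases use StreamWithContext instead), and everything else is a standard exec-subresource request:

package main

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execCatHosts opens an exec stream to one container of a pod and runs
// `cat /etc/hosts`, capturing stdout and stderr, just as each probe above does.
func execCatHosts(config *rest.Config, clientset *kubernetes.Clientset, ns, pod, container string) (string, string, error) {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

The test runs this against /etc/hosts and /etc/hosts-original in each container: kubelet-managed containers see the rewritten file, while the container that mounts its own /etc/hosts and the hostNetwork=true pod see the node's original file.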
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:06:18.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-9fmt
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 14:06:18.818: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9fmt" in namespace "subpath-4084" to be "success or failure"
Jan 25 14:06:18.827: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284806ms
Jan 25 14:06:20.836: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018171826s
Jan 25 14:06:22.856: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038054598s
Jan 25 14:06:24.945: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126482448s
Jan 25 14:06:26.965: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146291902s
Jan 25 14:06:28.993: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 10.174459394s
Jan 25 14:06:31.002: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 12.184195411s
Jan 25 14:06:33.011: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 14.192672401s
Jan 25 14:06:35.081: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 16.26220638s
Jan 25 14:06:37.088: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 18.269203888s
Jan 25 14:06:39.096: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 20.277772636s
Jan 25 14:06:41.166: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 22.347963392s
Jan 25 14:06:43.178: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 24.359219052s
Jan 25 14:06:45.185: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 26.366731331s
Jan 25 14:06:47.191: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Running", Reason="", readiness=true. Elapsed: 28.373189692s
Jan 25 14:06:49.201: INFO: Pod "pod-subpath-test-configmap-9fmt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.382233465s
STEP: Saw pod success
Jan 25 14:06:49.201: INFO: Pod "pod-subpath-test-configmap-9fmt" satisfied condition "success or failure"
Jan 25 14:06:49.205: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-9fmt container test-container-subpath-configmap-9fmt: 
STEP: delete the pod
Jan 25 14:06:49.293: INFO: Waiting for pod pod-subpath-test-configmap-9fmt to disappear
Jan 25 14:06:49.301: INFO: Pod pod-subpath-test-configmap-9fmt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9fmt
Jan 25 14:06:49.301: INFO: Deleting pod "pod-subpath-test-configmap-9fmt" in namespace "subpath-4084"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:06:49.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4084" for this suite.
Jan 25 14:06:55.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:06:55.473: INFO: namespace subpath-4084 deletion completed in 6.164875858s

• [SLOW TEST:36.833 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
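[Editor's note] "mountPath of existing file" means a single volume key is projected via subPath directly over a path that already exists in the image. A sketch, assuming the imports from the first sketch above; the configmap name, key, and target file are hypothetical:

// subpathExistingFilePod: one configmap key mounted over an existing file.
func subpathExistingFilePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-example"}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // hypothetical
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/hostname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/hostname", // a file that already exists in the image
					SubPath:   "hostname",      // single key projected over that file
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

The "atomic writer" part of the suite name refers to the kubelet's atomically-updated projection directory backing such mounts; the long Running phase above is the pod repeatedly reading the file while the test rewrites it.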
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:06:55.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:06:55.613: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.634627ms)
Jan 25 14:06:55.620: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.616986ms)
Jan 25 14:06:55.627: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.985028ms)
Jan 25 14:06:55.635: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.689298ms)
Jan 25 14:06:55.643: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.093ms)
Jan 25 14:06:55.654: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.94841ms)
Jan 25 14:06:55.664: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.065907ms)
Jan 25 14:06:55.672: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.361461ms)
Jan 25 14:06:55.678: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.496053ms)
Jan 25 14:06:55.685: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.786653ms)
Jan 25 14:06:55.691: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.015ms)
Jan 25 14:06:55.697: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.613523ms)
Jan 25 14:06:55.703: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.101333ms)
Jan 25 14:06:55.710: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.959725ms)
Jan 25 14:06:55.715: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.022029ms)
Jan 25 14:06:55.720: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.24581ms)
Jan 25 14:06:55.728: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.330653ms)
Jan 25 14:06:55.735: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.877754ms)
Jan 25 14:06:55.760: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 25.055703ms)
Jan 25 14:06:55.784: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.579741ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:06:55.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7276" for this suite.
Jan 25 14:07:01.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:07:01.970: INFO: namespace proxy-7276 deletion completed in 6.180787304s

• [SLOW TEST:6.497 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
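[Editor's note] Each numbered probe above is a GET through the apiserver's node proxy subresource, addressing the node with an explicit kubelet port. A sketch with client-go; DoRaw matches the v1.15-era request API (newer releases take a context, DoRaw(ctx)):

package main

import (
	"k8s.io/client-go/kubernetes"
)

// nodeProxyLogs fetches the kubelet's /logs/ directory listing via
// /api/v1/nodes/<name>:10250/proxy/logs/, mirroring the requests in the log.
func nodeProxyLogs(clientset *kubernetes.Clientset, nodeName string) ([]byte, error) {
	return clientset.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(nodeName + ":10250"). // explicit kubelet port, as in the log
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
}

The "(200; 16.634627ms)" suffixes above are the framework recording the HTTP status and per-request latency for each of the 20 probes.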
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:07:01.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6901.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6901.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6901.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.40.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.40.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.40.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.40.178_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6901.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6901.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6901.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6901.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.40.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.40.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.40.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.40.178_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:07:16.327: INFO: Unable to read wheezy_udp@dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.339: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.346: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.357: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.367: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.375: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.381: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.386: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.391: INFO: Unable to read 10.100.40.178_udp@PTR from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.399: INFO: Unable to read 10.100.40.178_tcp@PTR from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.407: INFO: Unable to read jessie_udp@dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.466: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.471: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.481: INFO: Unable to read jessie_udp@PodARecord from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.486: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.492: INFO: Unable to read 10.100.40.178_udp@PTR from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.496: INFO: Unable to read 10.100.40.178_tcp@PTR from pod dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51: the server could not find the requested resource (get pods dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51)
Jan 25 14:07:16.496: INFO: Lookups using dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51 failed for: [wheezy_udp@dns-test-service.dns-6901.svc.cluster.local wheezy_tcp@dns-test-service.dns-6901.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.40.178_udp@PTR 10.100.40.178_tcp@PTR jessie_udp@dns-test-service.dns-6901.svc.cluster.local jessie_tcp@dns-test-service.dns-6901.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6901.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6901.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6901.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.100.40.178_udp@PTR 10.100.40.178_tcp@PTR]

Jan 25 14:07:21.627: INFO: DNS probes using dns-6901/dns-test-ab9f5bf0-f875-46ce-a5e7-e3cfcb424b51 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:07:22.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6901" for this suite.
Jan 25 14:07:28.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:07:28.283: INFO: namespace dns-6901 deletion completed in 6.175543881s

• [SLOW TEST:26.312 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
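[Editor's note] The dig loops above run inside probe pods and keep retrying until each record resolves (the first pass at 14:07:16 fails because the pod and DNS records are still propagating; the 14:07:21 pass succeeds). The Go-native equivalent of the checks, run from inside the cluster, is a plain resolver lookup; the names below are taken from this run's log:

package main

import (
	"fmt"
	"net"
)

// lookupServiceRecords verifies the same two record types the dig probes do:
// the headless service's A record and the SRV record for its named http port.
func lookupServiceRecords() error {
	addrs, err := net.LookupHost("dns-test-service.dns-6901.svc.cluster.local")
	if err != nil {
		return err
	}
	fmt.Println("A:", addrs)

	cname, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-6901.svc.cluster.local")
	if err != nil {
		return err
	}
	fmt.Println("SRV:", cname, srvs)
	return nil
}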
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:07:28.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 25 14:07:28.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817541,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 14:07:28.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817541,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 25 14:07:38.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817555,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 25 14:07:38.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817555,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 25 14:07:48.420: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817569,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 14:07:48.420: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817569,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 25 14:07:58.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817584,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 14:07:58.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-a,UID:49f95482-9a55-48f2-9e47-64195d0c9e25,ResourceVersion:21817584,Generation:0,CreationTimestamp:2020-01-25 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 25 14:08:08.466: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-b,UID:d00ad535-9092-4341-bc06-1fbc02d6d610,ResourceVersion:21817599,Generation:0,CreationTimestamp:2020-01-25 14:08:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 14:08:08.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-b,UID:d00ad535-9092-4341-bc06-1fbc02d6d610,ResourceVersion:21817599,Generation:0,CreationTimestamp:2020-01-25 14:08:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 25 14:08:18.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-b,UID:d00ad535-9092-4341-bc06-1fbc02d6d610,ResourceVersion:21817613,Generation:0,CreationTimestamp:2020-01-25 14:08:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 14:08:18.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7881,SelfLink:/api/v1/namespaces/watch-7881/configmaps/e2e-watch-test-configmap-b,UID:d00ad535-9092-4341-bc06-1fbc02d6d610,ResourceVersion:21817613,Generation:0,CreationTimestamp:2020-01-25 14:08:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:08:28.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7881" for this suite.
Jan 25 14:08:34.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:08:34.640: INFO: namespace watch-7881 deletion completed in 6.138426132s

• [SLOW TEST:66.357 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
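A minimal client-go sketch of one of the three watches set up above, filtered to label A; it assumes the client-go release contemporary with this v1.15 cluster, where Watch takes only ListOptions (newer releases also take a context):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch only configmaps carrying label A, as in "creating a watch on
	// configmaps with label A" above.
	w, err := clientset.CoreV1().ConfigMaps("watch-7881").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED / MODIFIED / DELETED, as logged above
	}
}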
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:08:34.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-486/configmap-test-d4cc350e-487c-43af-9a21-99f053445cca
STEP: Creating a pod to test consume configMaps
Jan 25 14:08:34.951: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1" in namespace "configmap-486" to be "success or failure"
Jan 25 14:08:34.964: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.216366ms
Jan 25 14:08:36.974: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022563748s
Jan 25 14:08:38.980: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029267755s
Jan 25 14:08:40.988: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036406627s
Jan 25 14:08:42.995: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043825131s
STEP: Saw pod success
Jan 25 14:08:42.995: INFO: Pod "pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1" satisfied condition "success or failure"
Jan 25 14:08:42.998: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1 container env-test: 
STEP: delete the pod
Jan 25 14:08:43.042: INFO: Waiting for pod pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1 to disappear
Jan 25 14:08:43.144: INFO: Pod pod-configmaps-ee07653c-6779-4288-a7b9-2b1afd5152f1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:08:43.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-486" for this suite.
Jan 25 14:08:49.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:08:49.338: INFO: namespace configmap-486 deletion completed in 6.173757092s

• [SLOW TEST:14.696 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
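What the "pod to test consume configMaps" looks like in outline: an env var populated from a key of the configmap created above, in a container that just prints its environment. This is a hedged sketch, not the test's exact manifest; the key name "data-1", the env var name, and the busybox image are assumptions, since the log never prints the configmap contents:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func envTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "configmap-test-d4cc350e-487c-43af-9a21-99f053445cca",
							},
							Key: "data-1",
						},
					},
				}},
			}},
		},
	}
}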
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:08:49.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 25 14:08:49.485: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 14:08:49.554: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 14:08:49.558: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
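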
Jan 25 14:08:49.569: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 25 14:08:49.569: INFO: 	Container weave ready: true, restart count 0
Jan 25 14:08:49.569: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 14:08:49.569: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.569: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 14:08:49.569: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 25 14:08:49.580: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 25 14:08:49.580: INFO: 	Container weave ready: true, restart count 0
Jan 25 14:08:49.580: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 14:08:49.580: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container coredns ready: true, restart count 0
Jan 25 14:08:49.580: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container etcd ready: true, restart count 0
Jan 25 14:08:49.580: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 14:08:49.580: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 25 14:08:49.580: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 25 14:08:49.580: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container coredns ready: true, restart count 0
Jan 25 14:08:49.580: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 25 14:08:49.580: INFO: 	Container kube-scheduler ready: true, restart count 13
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c8939ca4-0668-4260-acee-0efb1272a122 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c8939ca4-0668-4260-acee-0efb1272a122 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c8939ca4-0668-4260-acee-0efb1272a122
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:09:07.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6436" for this suite.
Jan 25 14:09:22.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:09:22.269: INFO: namespace sched-pred-6436 deletion completed in 14.268681306s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.930 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
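The relaunched pod in outline: its nodeSelector is pinned to the random label just applied to iruya-node, so the scheduler can only place it there. A sketch with the label key and value copied from the log; the pod name and pause image are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nodeSelectorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Only a node carrying this exact label/value can run the pod.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-c8939ca4-0668-4260-acee-0efb1272a122": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause",
			}},
		},
	}
}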
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:09:22.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:09:22.487: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ef64f335-5f9c-47bc-9fee-500dab86ecb8", Controller:(*bool)(0xc002a29352), BlockOwnerDeletion:(*bool)(0xc002a29353)}}
Jan 25 14:09:22.509: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cab9f3f5-583a-4c65-8d8b-59dd6aa32cb5", Controller:(*bool)(0xc0026ae21a), BlockOwnerDeletion:(*bool)(0xc0026ae21b)}}
Jan 25 14:09:22.585: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f02fc615-2c7c-4beb-a74b-178175021d99", Controller:(*bool)(0xc0026ae3da), BlockOwnerDeletion:(*bool)(0xc0026ae3db)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:09:27.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-898" for this suite.
Jan 25 14:09:33.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:09:33.807: INFO: namespace gc-898 deletion completed in 6.182292398s

• [SLOW TEST:11.538 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
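The three OwnerReferences dumped above form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector must still manage to delete all of them. A sketch of the reference shape matching the fields in the dump; the pointer addresses printed in the log are the two *bool fields:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// ownedBy builds the OwnerReference each pod in the circle attaches to the
// next: pod1 gets ownedBy(pod3), pod2 gets ownedBy(pod1), pod3 gets ownedBy(pod2).
func ownedBy(owner *corev1.Pod) metav1.OwnerReference {
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         boolPtr(true),
		BlockOwnerDeletion: boolPtr(true),
	}
}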
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:09:33.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 14:09:42.164: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:09:42.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5303" for this suite.
Jan 25 14:09:48.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:09:48.443: INFO: namespace container-runtime-5303 deletion completed in 6.218843118s

• [SLOW TEST:14.635 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
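The container under test, as a hedged sketch: it exits 0 without writing a termination message, and because FallbackToLogsOnError only consults the logs when the container fails, the reported message stays empty, which is the `&{}` match logged above. Image and command are assumptions:

package main

import corev1 "k8s.io/api/core/v1"

func terminatedContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"true"}, // succeed, write nothing
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}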
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:09:48.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 14:09:56.728: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:09:56.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8686" for this suite.
Jan 25 14:10:02.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:10:02.993: INFO: namespace container-runtime-8686 deletion completed in 6.192927276s

• [SLOW TEST:14.550 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
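The companion case in sketch form: the container writes OK to its termination-message path before exiting 0, and the kubelet reports it verbatim, hence the `&{OK}` match above. Image and command are assumptions:

package main

import corev1 "k8s.io/api/core/v1"

func fileMessageContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}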
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:10:02.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-36b941a7-faaa-4672-a650-66d454b72064 in namespace container-probe-1174
Jan 25 14:10:11.103: INFO: Started pod busybox-36b941a7-faaa-4672-a650-66d454b72064 in namespace container-probe-1174
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 14:10:11.109: INFO: Initial restart count of pod busybox-36b941a7-faaa-4672-a650-66d454b72064 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:14:12.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1174" for this suite.
Jan 25 14:14:18.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:14:19.022: INFO: namespace container-probe-1174 deletion completed in 6.189675495s

• [SLOW TEST:256.028 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
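The probe wiring for this spec, sketched under assumptions (timings, image, command): the container keeps /tmp/health in place, so the exec probe keeps succeeding and restartCount stays at 0 across the roughly four-minute observation window above. Note the embedded field is Handler in the 1.15-era API (renamed ProbeHandler in newer releases):

package main

import corev1 "k8s.io/api/core/v1"

func busyboxLiveness() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
}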
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:14:19.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-581524ff-40e6-479a-8b41-93349108504a
STEP: Creating a pod to test consume secrets
Jan 25 14:14:19.219: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08" in namespace "projected-8895" to be "success or failure"
Jan 25 14:14:19.224: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Pending", Reason="", readiness=false. Elapsed: 5.18461ms
Jan 25 14:14:21.234: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015437812s
Jan 25 14:14:23.242: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023037271s
Jan 25 14:14:25.250: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031285381s
Jan 25 14:14:27.262: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043043127s
Jan 25 14:14:29.276: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056929996s
STEP: Saw pod success
Jan 25 14:14:29.276: INFO: Pod "pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08" satisfied condition "success or failure"
Jan 25 14:14:29.283: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08 container secret-volume-test: 
STEP: delete the pod
Jan 25 14:14:29.378: INFO: Waiting for pod pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08 to disappear
Jan 25 14:14:29.394: INFO: Pod pod-projected-secrets-f9286be1-5211-4444-aceb-428ce56a6d08 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:14:29.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8895" for this suite.
Jan 25 14:14:35.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:14:35.566: INFO: namespace projected-8895 deletion completed in 6.167000316s

• [SLOW TEST:16.543 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
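The pod shape in sketch: one projected secret surfaced through two separate volumes, each with its own mount path in the test container. Mount paths and image are assumptions:

package main

import corev1 "k8s.io/api/core/v1"

func twoSecretVolumes(secretName string) corev1.PodSpec {
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	}
}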
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:14:35.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 25 14:14:35.634: INFO: namespace kubectl-8450
Jan 25 14:14:35.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8450'
Jan 25 14:14:37.896: INFO: stderr: ""
Jan 25 14:14:37.896: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 25 14:14:38.917: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:38.918: INFO: Found 0 / 1
Jan 25 14:14:39.905: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:39.905: INFO: Found 0 / 1
Jan 25 14:14:40.906: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:40.906: INFO: Found 0 / 1
Jan 25 14:14:41.906: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:41.906: INFO: Found 0 / 1
Jan 25 14:14:42.904: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:42.905: INFO: Found 0 / 1
Jan 25 14:14:43.916: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:43.916: INFO: Found 0 / 1
Jan 25 14:14:44.905: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:44.905: INFO: Found 0 / 1
Jan 25 14:14:45.906: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:45.906: INFO: Found 0 / 1
Jan 25 14:14:46.908: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:46.908: INFO: Found 1 / 1
Jan 25 14:14:46.908: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 14:14:46.918: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:14:46.918: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 14:14:46.918: INFO: wait on redis-master startup in kubectl-8450 
Jan 25 14:14:46.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cc2ph redis-master --namespace=kubectl-8450'
Jan 25 14:14:47.164: INFO: stderr: ""
Jan 25 14:14:47.165: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Jan 14:14:44.855 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 14:14:44.855 # Server started, Redis version 3.2.12\n1:M 25 Jan 14:14:44.856 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 14:14:44.856 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 25 14:14:47.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8450'
Jan 25 14:14:47.368: INFO: stderr: ""
Jan 25 14:14:47.368: INFO: stdout: "service/rm2 exposed\n"
Jan 25 14:14:47.418: INFO: Service rm2 in namespace kubectl-8450 found.
STEP: exposing service
Jan 25 14:14:49.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8450'
Jan 25 14:14:49.674: INFO: stderr: ""
Jan 25 14:14:49.674: INFO: stdout: "service/rm3 exposed\n"
Jan 25 14:14:49.721: INFO: Service rm3 in namespace kubectl-8450 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:14:51.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8450" for this suite.
Jan 25 14:15:13.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:15:13.945: INFO: namespace kubectl-8450 deletion completed in 22.200607952s

• [SLOW TEST:38.379 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
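What `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` above reduces to, sketched as the Service it creates: a selector matching the RC's pods, with the service port forwarded to the Redis port. The app:redis selector is an assumption based on the label the test polls on:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rm2Service() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-8450"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis"},
			Ports: []corev1.ServicePort{{
				Port:       1234,                 // service port
				TargetPort: intstr.FromInt(6379), // container port
			}},
		},
	}
}

The later `kubectl expose service rm2 --name=rm3` step produces the same shape again, port 2345, since exposing a service copies its selector.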
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:15:13.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 14:15:14.042: INFO: Waiting up to 5m0s for pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df" in namespace "emptydir-4961" to be "success or failure"
Jan 25 14:15:14.052: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301517ms
Jan 25 14:15:16.059: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017063069s
Jan 25 14:15:18.068: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026428404s
Jan 25 14:15:20.077: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035554686s
Jan 25 14:15:22.097: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055719584s
Jan 25 14:15:24.114: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072794139s
STEP: Saw pod success
Jan 25 14:15:24.115: INFO: Pod "pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df" satisfied condition "success or failure"
Jan 25 14:15:24.118: INFO: Trying to get logs from node iruya-node pod pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df container test-container: 
STEP: delete the pod
Jan 25 14:15:24.347: INFO: Waiting for pod pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df to disappear
Jan 25 14:15:24.369: INFO: Pod pod-2b7056cf-7f4f-41c6-87a7-e2fb8a5315df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:15:24.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4961" for this suite.
Jan 25 14:15:30.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:15:30.620: INFO: namespace emptydir-4961 deletion completed in 6.241342692s

• [SLOW TEST:16.674 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
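The test pod in outline: a default-medium emptyDir mounted into a container running as a non-root user, which creates a 0666-mode file and reads it back before the pod goes Succeeded. UID, image, and command are assumptions:

package main

import corev1 "k8s.io/api/core/v1"

func int64Ptr(i int64) *int64 { return &i }

func emptyDirPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: int64Ptr(1001), // non-root
		},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Empty medium string selects the default (node disk) medium.
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "touch /test/f && chmod 0666 /test/f && ls -l /test/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
		}},
	}
}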
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:15:30.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:15:58.753: INFO: Container started at 2020-01-25 14:15:36 +0000 UTC, pod became ready at 2020-01-25 14:15:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:15:58.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2367" for this suite.
Jan 25 14:16:20.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:16:21.004: INFO: namespace container-probe-2367 deletion completed in 22.243548592s

• [SLOW TEST:50.383 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
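The probe under test, sketched: a readiness probe gated behind an initial delay, with no liveness probe at all, so the pod turns Ready late (container started 14:15:36, ready 14:15:57 above) and is never restarted. The exact numbers are assumptions; as in the previous sketch, the embedded field is Handler in the 1.15-era API:

package main

import corev1 "k8s.io/api/core/v1"

func readinessOnly() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
		},
		InitialDelaySeconds: 20, // pod cannot be Ready before this elapses
		PeriodSeconds:       5,
	}
}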
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:16:21.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 25 14:16:21.131: INFO: Waiting up to 5m0s for pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e" in namespace "downward-api-9904" to be "success or failure"
Jan 25 14:16:21.145: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.864536ms
Jan 25 14:16:23.155: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023385265s
Jan 25 14:16:25.229: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097746409s
Jan 25 14:16:27.237: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105075807s
Jan 25 14:16:29.248: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116132964s
Jan 25 14:16:31.255: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123660978s
STEP: Saw pod success
Jan 25 14:16:31.255: INFO: Pod "downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e" satisfied condition "success or failure"
Jan 25 14:16:31.265: INFO: Trying to get logs from node iruya-node pod downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e container dapi-container: 
STEP: delete the pod
Jan 25 14:16:31.408: INFO: Waiting for pod downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e to disappear
Jan 25 14:16:31.419: INFO: Pod downward-api-dc4859c1-f2d5-490e-8c7e-a6acb5ff0d0e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:16:31.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9904" for this suite.
Jan 25 14:16:37.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:16:37.591: INFO: namespace downward-api-9904 deletion completed in 6.155758968s

• [SLOW TEST:16.587 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
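The dapi-container's environment in sketch form: pod name, namespace, and IP injected through downward-API field refs, which the container then prints. The env var names are assumptions; the field paths are the standard ones:

package main

import corev1 "k8s.io/api/core/v1"

func downwardEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}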
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:16:37.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 25 14:16:37.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-133'
Jan 25 14:16:38.046: INFO: stderr: ""
Jan 25 14:16:38.046: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 14:16:38.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:16:38.248: INFO: stderr: ""
Jan 25 14:16:38.248: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
Jan 25 14:16:38.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rzdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:38.361: INFO: stderr: ""
Jan 25 14:16:38.361: INFO: stdout: ""
Jan 25 14:16:38.361: INFO: update-demo-nautilus-7rzdl is created but not running
Jan 25 14:16:43.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:16:45.061: INFO: stderr: ""
Jan 25 14:16:45.061: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
Jan 25 14:16:45.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rzdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:45.475: INFO: stderr: ""
Jan 25 14:16:45.475: INFO: stdout: ""
Jan 25 14:16:45.475: INFO: update-demo-nautilus-7rzdl is created but not running
Jan 25 14:16:50.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:16:50.652: INFO: stderr: ""
Jan 25 14:16:50.652: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
Jan 25 14:16:50.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rzdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:50.761: INFO: stderr: ""
Jan 25 14:16:50.762: INFO: stdout: "true"
Jan 25 14:16:50.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rzdl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:50.896: INFO: stderr: ""
Jan 25 14:16:50.896: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 14:16:50.896: INFO: validating pod update-demo-nautilus-7rzdl
Jan 25 14:16:50.910: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 14:16:50.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 14:16:50.910: INFO: update-demo-nautilus-7rzdl is verified up and running
Jan 25 14:16:50.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:51.039: INFO: stderr: ""
Jan 25 14:16:51.039: INFO: stdout: "true"
Jan 25 14:16:51.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:16:51.149: INFO: stderr: ""
Jan 25 14:16:51.149: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 14:16:51.150: INFO: validating pod update-demo-nautilus-q55k2
Jan 25 14:16:51.157: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 14:16:51.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 14:16:51.157: INFO: update-demo-nautilus-q55k2 is verified up and running
STEP: scaling down the replication controller
Jan 25 14:16:51.160: INFO: scanned /root for discovery docs: 
Jan 25 14:16:51.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-133'
Jan 25 14:16:52.337: INFO: stderr: ""
Jan 25 14:16:52.337: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 14:16:52.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:16:52.458: INFO: stderr: ""
Jan 25 14:16:52.458: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 14:16:57.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:16:57.673: INFO: stderr: ""
Jan 25 14:16:57.674: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 14:17:02.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:17:02.871: INFO: stderr: ""
Jan 25 14:17:02.871: INFO: stdout: "update-demo-nautilus-7rzdl update-demo-nautilus-q55k2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 14:17:07.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:17:08.110: INFO: stderr: ""
Jan 25 14:17:08.110: INFO: stdout: "update-demo-nautilus-q55k2 "
Jan 25 14:17:08.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:08.195: INFO: stderr: ""
Jan 25 14:17:08.195: INFO: stdout: "true"
Jan 25 14:17:08.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:08.288: INFO: stderr: ""
Jan 25 14:17:08.288: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 14:17:08.288: INFO: validating pod update-demo-nautilus-q55k2
Jan 25 14:17:08.293: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 14:17:08.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 14:17:08.293: INFO: update-demo-nautilus-q55k2 is verified up and running
STEP: scaling up the replication controller
Jan 25 14:17:08.295: INFO: scanned /root for discovery docs: 
Jan 25 14:17:08.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-133'
Jan 25 14:17:09.472: INFO: stderr: ""
Jan 25 14:17:09.472: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 14:17:09.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:17:09.618: INFO: stderr: ""
Jan 25 14:17:09.618: INFO: stdout: "update-demo-nautilus-7kljg update-demo-nautilus-q55k2 "
Jan 25 14:17:09.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:09.781: INFO: stderr: ""
Jan 25 14:17:09.781: INFO: stdout: ""
Jan 25 14:17:09.781: INFO: update-demo-nautilus-7kljg is created but not running
Jan 25 14:17:14.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:17:14.961: INFO: stderr: ""
Jan 25 14:17:14.961: INFO: stdout: "update-demo-nautilus-7kljg update-demo-nautilus-q55k2 "
Jan 25 14:17:14.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:15.194: INFO: stderr: ""
Jan 25 14:17:15.194: INFO: stdout: ""
Jan 25 14:17:15.194: INFO: update-demo-nautilus-7kljg is created but not running
Jan 25 14:17:20.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-133'
Jan 25 14:17:20.366: INFO: stderr: ""
Jan 25 14:17:20.367: INFO: stdout: "update-demo-nautilus-7kljg update-demo-nautilus-q55k2 "
Jan 25 14:17:20.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:20.489: INFO: stderr: ""
Jan 25 14:17:20.489: INFO: stdout: "true"
Jan 25 14:17:20.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kljg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:20.604: INFO: stderr: ""
Jan 25 14:17:20.604: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 14:17:20.604: INFO: validating pod update-demo-nautilus-7kljg
Jan 25 14:17:20.624: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 14:17:20.624: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 14:17:20.624: INFO: update-demo-nautilus-7kljg is verified up and running
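
The go-template above prints "true" only once a containerStatuses entry named update-demo reports a running state, which is why the first two polls returned an empty stdout. A minimal sketch of an equivalent check using kubectl's jsonpath output (pod and namespace names taken from this run; the filter expression assumes the cluster's kubectl supports jsonpath filters):

  kubectl --kubeconfig=/root/.kube/config get pod update-demo-nautilus-7kljg --namespace=kubectl-133 \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running}'
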
Jan 25 14:17:20.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:20.700: INFO: stderr: ""
Jan 25 14:17:20.700: INFO: stdout: "true"
Jan 25 14:17:20.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q55k2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-133'
Jan 25 14:17:20.825: INFO: stderr: ""
Jan 25 14:17:20.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 14:17:20.825: INFO: validating pod update-demo-nautilus-q55k2
Jan 25 14:17:20.829: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 14:17:20.829: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 14:17:20.829: INFO: update-demo-nautilus-q55k2 is verified up and running
STEP: using delete to clean up resources
Jan 25 14:17:20.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-133'
Jan 25 14:17:20.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:17:20.938: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 25 14:17:20.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-133'
Jan 25 14:17:21.085: INFO: stderr: "No resources found.\n"
Jan 25 14:17:21.085: INFO: stdout: ""
Jan 25 14:17:21.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-133 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 14:17:21.289: INFO: stderr: ""
Jan 25 14:17:21.290: INFO: stdout: ""
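
The cleanup above uses --grace-period=0 --force, which is why kubectl prints the warning that termination is not confirmed. A minimal sketch of the graceful equivalent, which lets the pods run out their termination grace period (resource and namespace names taken from this run):

  kubectl --kubeconfig=/root/.kube/config delete rc update-demo-nautilus --namespace=kubectl-133
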
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:17:21.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-133" for this suite.
Jan 25 14:17:43.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:17:43.444: INFO: namespace kubectl-133 deletion completed in 22.141215676s

• [SLOW TEST:65.853 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:17:43.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 25 14:17:43.497: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 25 14:17:43.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:43.870: INFO: stderr: ""
Jan 25 14:17:43.870: INFO: stdout: "service/redis-slave created\n"
Jan 25 14:17:43.871: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 25 14:17:43.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:44.693: INFO: stderr: ""
Jan 25 14:17:44.693: INFO: stdout: "service/redis-master created\n"
Jan 25 14:17:44.694: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 25 14:17:44.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:45.062: INFO: stderr: ""
Jan 25 14:17:45.062: INFO: stdout: "service/frontend created\n"
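
The commented-out type: LoadBalancer in the frontend manifest above only applies to clusters backed by a cloud load balancer. On clusters without one, a NodePort service is a common alternative; a minimal sketch (the nodePort value 30080 is an arbitrary assumption within the default 30000-32767 range):

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
    labels:
      app: guestbook
      tier: frontend
  spec:
    type: NodePort
    ports:
    - port: 80
      nodePort: 30080   # assumed port; must fall in the cluster's NodePort range
    selector:
      app: guestbook
      tier: frontend
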
Jan 25 14:17:45.063: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 25 14:17:45.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:45.626: INFO: stderr: ""
Jan 25 14:17:45.626: INFO: stdout: "deployment.apps/frontend created\n"
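
GET_HOSTS_FROM=dns makes the php-redis container resolve the redis services through cluster DNS. The commented-out value: env fallback would instead read the service environment variables that the kubelet injects into pods started after the services exist; a minimal sketch of what those variables look like (the IP values are illustrative assumptions):

  # e.g. kubectl exec <frontend-pod> --namespace=kubectl-8107 -- env | grep REDIS
  REDIS_MASTER_SERVICE_HOST=10.109.12.34   # assumed ClusterIP
  REDIS_MASTER_SERVICE_PORT=6379
  REDIS_SLAVE_SERVICE_HOST=10.109.56.78    # assumed ClusterIP
  REDIS_SLAVE_SERVICE_PORT=6379
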
Jan 25 14:17:45.627: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 25 14:17:45.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:46.094: INFO: stderr: ""
Jan 25 14:17:46.095: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 25 14:17:46.096: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 25 14:17:46.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8107'
Jan 25 14:17:48.065: INFO: stderr: ""
Jan 25 14:17:48.065: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 25 14:17:48.065: INFO: Waiting for all frontend pods to be Running.
Jan 25 14:18:13.117: INFO: Waiting for frontend to serve content.
Jan 25 14:18:13.318: INFO: Trying to add a new entry to the guestbook.
Jan 25 14:18:13.400: INFO: Verifying that added entry can be retrieved.
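
The three validation lines above add a guestbook entry through the frontend and read it back. A minimal sketch of the same round trip done by hand, assuming the gb-frontend image's guestbook.php query interface and using port-forward to reach the service:

  kubectl --kubeconfig=/root/.kube/config port-forward service/frontend 8080:80 --namespace=kubectl-8107 &
  curl 'http://localhost:8080/guestbook.php?cmd=set&key=messages&value=TestEntry'
  curl 'http://localhost:8080/guestbook.php?cmd=get&key=messages'   # should return TestEntry
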
STEP: using delete to clean up resources
Jan 25 14:18:13.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:13.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:13.710: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 14:18:13.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:13.895: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:13.895: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 14:18:13.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:14.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:14.291: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 14:18:14.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:14.421: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:14.421: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 14:18:14.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:14.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:14.582: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 14:18:14.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8107'
Jan 25 14:18:14.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 14:18:14.736: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:18:14.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8107" for this suite.
Jan 25 14:18:58.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:18:58.995: INFO: namespace kubectl-8107 deletion completed in 44.247473693s

• [SLOW TEST:75.551 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:18:58.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 14:18:59.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-569'
Jan 25 14:18:59.251: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 14:18:59.252: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
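
As the deprecation warning in stderr suggests, the same deployment can be created without the deprecated generator; a minimal sketch (names taken from this run):

  kubectl --kubeconfig=/root/.kube/config create deployment e2e-test-nginx-deployment \
    --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-569
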
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 25 14:19:01.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-569'
Jan 25 14:19:01.681: INFO: stderr: ""
Jan 25 14:19:01.681: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:19:01.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-569" for this suite.
Jan 25 14:19:07.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:19:07.885: INFO: namespace kubectl-569 deletion completed in 6.198978618s

• [SLOW TEST:8.889 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:19:07.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
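
The QOS class verified here is derived from the pod's resource requests and limits: Guaranteed when every container sets CPU and memory limits equal to its requests, Burstable when at least one container sets some request or limit, and BestEffort when none are set. A minimal sketch of a pod that would land in the Guaranteed class (name and values are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo          # hypothetical name
  spec:
    containers:
    - name: app
      image: docker.io/library/nginx:1.14-alpine
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:             # equal to requests => status.qosClass: Guaranteed
          cpu: 100m
          memory: 100Mi
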
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:19:08.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7705" for this suite.
Jan 25 14:19:30.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:19:30.235: INFO: namespace pods-7705 deletion completed in 22.203649939s

• [SLOW TEST:22.350 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:19:30.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:19:30.387: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 25 14:19:35.396: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 14:19:39.409: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 25 14:19:41.415: INFO: Creating deployment "test-rollover-deployment"
Jan 25 14:19:41.429: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 25 14:19:43.447: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 25 14:19:43.462: INFO: Ensure that both replica sets have 1 created replica
Jan 25 14:19:43.475: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 25 14:19:43.489: INFO: Updating deployment test-rollover-deployment
Jan 25 14:19:43.489: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 25 14:19:45.512: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 25 14:19:45.534: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 25 14:19:45.547: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:45.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558783, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:47.568: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:47.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558783, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:49.566: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:49.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558783, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:51.561: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:51.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558783, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:53.563: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:53.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558792, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:55.565: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:55.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558792, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:57.562: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:57.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558792, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:19:59.562: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:19:59.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558792, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:01.566: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 14:20:01.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558792, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558781, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:03.558: INFO: 
Jan 25 14:20:03.558: INFO: Ensure that both old replica sets have no replicas
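
The rollover is driven by updating the deployment's pod template at 14:19:43, and with MinReadySeconds of 10 on the deployment, each poll above keeps reporting one unavailable replica until the new pod has been ready for 10 seconds. The test performs the update through the API; a minimal sketch of triggering a comparable rollover with kubectl (container and image names taken from the replica set dumps below; kubectl set image changes only the image, not the container name, so this approximates the test's update):

  kubectl --kubeconfig=/root/.kube/config set image deployment/test-rollover-deployment \
    redis=gcr.io/kubernetes-e2e-test-images/redis:1.0 --namespace=deployment-8577
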
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 25 14:20:03.565: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8577,SelfLink:/apis/apps/v1/namespaces/deployment-8577/deployments/test-rollover-deployment,UID:67d80f42-03ea-4e5d-a11a-c0fb6e759faa,ResourceVersion:21819256,Generation:2,CreationTimestamp:2020-01-25 14:19:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-25 14:19:41 +0000 UTC 2020-01-25 14:19:41 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-25 14:20:02 +0000 UTC 2020-01-25 14:19:41 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 14:20:03.568: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8577,SelfLink:/apis/apps/v1/namespaces/deployment-8577/replicasets/test-rollover-deployment-854595fc44,UID:0616bebc-0e04-4c5b-9a38-c1b541ea817b,ResourceVersion:21819245,Generation:2,CreationTimestamp:2020-01-25 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 67d80f42-03ea-4e5d-a11a-c0fb6e759faa 0xc0024db347 0xc0024db348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 25 14:20:03.568: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 25 14:20:03.568: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8577,SelfLink:/apis/apps/v1/namespaces/deployment-8577/replicasets/test-rollover-controller,UID:bf89241f-8820-48c7-8909-9470e222afdc,ResourceVersion:21819253,Generation:2,CreationTimestamp:2020-01-25 14:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 67d80f42-03ea-4e5d-a11a-c0fb6e759faa 0xc0024db09f 0xc0024db0d0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 14:20:03.568: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8577,SelfLink:/apis/apps/v1/namespaces/deployment-8577/replicasets/test-rollover-deployment-9b8b997cf,UID:f4c8844e-cc50-4a85-8d1d-fac9bc7c18c6,ResourceVersion:21819211,Generation:2,CreationTimestamp:2020-01-25 14:19:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 67d80f42-03ea-4e5d-a11a-c0fb6e759faa 0xc0024db760 0xc0024db761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 14:20:03.571: INFO: Pod "test-rollover-deployment-854595fc44-5lm2p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5lm2p,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8577,SelfLink:/api/v1/namespaces/deployment-8577/pods/test-rollover-deployment-854595fc44-5lm2p,UID:1fe68898-16f1-44ec-ab6e-1836dc3c271d,ResourceVersion:21819228,Generation:0,CreationTimestamp:2020-01-25 14:19:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0616bebc-0e04-4c5b-9a38-c1b541ea817b 0xc000f58647 0xc000f58648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zkjw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zkjw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-zkjw8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f58760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f58790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:19:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:19:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:19:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:19:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-25 14:19:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-25 14:19:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://aae60a130173889ed3504de8e11ed30bbd6a14e5eaced42fd95fd1fbb363e401}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:20:03.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8577" for this suite.
Jan 25 14:20:09.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:20:09.732: INFO: namespace deployment-8577 deletion completed in 6.157511957s

• [SLOW TEST:39.497 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:20:09.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:20:09.913: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 25 14:20:09.934: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 25 14:20:14.947: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 14:20:18.964: INFO: Creating deployment "test-rolling-update-deployment"
Jan 25 14:20:18.974: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 25 14:20:19.008: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set
Jan 25 14:20:21.047: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jan 25 14:20:21.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:23.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:25.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:27.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715558819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 14:20:29.061: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
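
The deployment dump below shows the default RollingUpdate strategy of maxUnavailable: 25% and maxSurge: 25%, which for a single replica lets the controller surge one new pod before scaling down the adopted one. A minimal sketch of spelling that strategy out explicitly in a Deployment manifest:

  spec:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%
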
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 25 14:20:29.075: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2705,SelfLink:/apis/apps/v1/namespaces/deployment-2705/deployments/test-rolling-update-deployment,UID:555f3c6b-8e4a-42a8-97fa-310a6908ad10,ResourceVersion:21819368,Generation:1,CreationTimestamp:2020-01-25 14:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-25 14:20:19 +0000 UTC 2020-01-25 14:20:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-25 14:20:27 +0000 UTC 2020-01-25 14:20:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 14:20:29.079: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2705,SelfLink:/apis/apps/v1/namespaces/deployment-2705/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:2bfb00e5-6b78-4522-b476-e04405d635d9,ResourceVersion:21819357,Generation:1,CreationTimestamp:2020-01-25 14:20:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 555f3c6b-8e4a-42a8-97fa-310a6908ad10 0xc0027e1977 0xc0027e1978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 25 14:20:29.079: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 25 14:20:29.079: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2705,SelfLink:/apis/apps/v1/namespaces/deployment-2705/replicasets/test-rolling-update-controller,UID:8e9f87a1-7c78-4cd6-b23e-4e4b412b446a,ResourceVersion:21819367,Generation:2,CreationTimestamp:2020-01-25 14:20:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 555f3c6b-8e4a-42a8-97fa-310a6908ad10 0xc0027e1707 0xc0027e1708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 14:20:29.083: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-jb925" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-jb925,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2705,SelfLink:/api/v1/namespaces/deployment-2705/pods/test-rolling-update-deployment-79f6b9d75c-jb925,UID:7b1a2e01-a0c7-453d-8e26-8777a5d58095,ResourceVersion:21819356,Generation:0,CreationTimestamp:2020-01-25 14:20:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 2bfb00e5-6b78-4522-b476-e04405d635d9 0xc0022e0697 0xc0022e0698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-g9hqm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g9hqm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-g9hqm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022e0710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022e0730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:20:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:20:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:20:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:20:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-25 14:20:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-25 14:20:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://401235d8d59c7ef4d2866f2ee6bca429f8be4037d71e908e722830637977aaea}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:20:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2705" for this suite.
Jan 25 14:20:35.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:20:35.200: INFO: namespace deployment-2705 deletion completed in 6.113453795s

• [SLOW TEST:25.467 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
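Note: the rolling-update behaviour this spec asserts can be approximated by hand with stock kubectl. A minimal sketch, assuming access to a comparable v1.15 cluster; the namespace, object names, and the image swap below are illustrative, not what the suite generated:

$ kubectl create namespace rolling-demo
$ kubectl apply -n rolling-demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels: {name: sample-pod}
  strategy:
    type: RollingUpdate
    rollingUpdate: {maxUnavailable: 25%, maxSurge: 25%}
  template:
    metadata:
      labels: {name: sample-pod}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl -n rolling-demo set image deployment/demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
$ kubectl -n rolling-demo rollout status deployment/demo
$ kubectl -n rolling-demo get rs   # old ReplicaSet kept at DESIRED 0, new one at 1

Changing the pod template is what creates the second ReplicaSet; the Deployment controller scales the old one down to zero rather than deleting it, which matches the "All old ReplicaSets" dump above showing Replicas:*0.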
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:20:35.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:20:35.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799" in namespace "downward-api-375" to be "success or failure"
Jan 25 14:20:35.441: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Pending", Reason="", readiness=false. Elapsed: 135.353703ms
Jan 25 14:20:37.485: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178857596s
Jan 25 14:20:39.496: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190074286s
Jan 25 14:20:41.506: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199790467s
Jan 25 14:20:43.529: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223421147s
Jan 25 14:20:45.539: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.233037593s
STEP: Saw pod success
Jan 25 14:20:45.539: INFO: Pod "downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799" satisfied condition "success or failure"
Jan 25 14:20:45.545: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799 container client-container: 
STEP: delete the pod
Jan 25 14:20:45.888: INFO: Waiting for pod downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799 to disappear
Jan 25 14:20:45.898: INFO: Pod downwardapi-volume-6fc75b94-c09a-48b9-b0eb-860daa31a799 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:20:45.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-375" for this suite.
Jan 25 14:20:51.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:20:52.054: INFO: namespace downward-api-375 deletion completed in 6.150415333s

• [SLOW TEST:16.853 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
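Note: the pod under test mounts a downwardAPI volume that projects metadata.name into a file, which the container then prints. A hand-rolled equivalent, with illustrative names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef: {fieldPath: metadata.name}
EOF
$ kubectl logs downward-demo   # prints "downward-demo" once the pod reaches Succeeded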
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:20:52.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:20:52.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 25 14:20:52.377: INFO: stderr: ""
Jan 25 14:20:52.377: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:20:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9987" for this suite.
Jan 25 14:20:58.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:20:58.551: INFO: namespace kubectl-9987 deletion completed in 6.16757777s

• [SLOW TEST:6.497 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
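Note: this spec only asserts that both halves of the version report appear in stdout. The same check by hand:

$ kubectl version           # full Client Version and Server Version structs, as in the stdout above
$ kubectl version --short   # condensed form, e.g. "Client Version: v1.15.7"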
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:20:58.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:21:04.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9795" for this suite.
Jan 25 14:21:11.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:21:11.155: INFO: namespace namespaces-9795 deletion completed in 6.171956534s
STEP: Destroying namespace "nsdeletetest-7719" for this suite.
Jan 25 14:21:11.255: INFO: Namespace nsdeletetest-7719 was already deleted
STEP: Destroying namespace "nsdeletetest-8111" for this suite.
Jan 25 14:21:17.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:21:17.390: INFO: namespace nsdeletetest-8111 deletion completed in 6.135649649s

• [SLOW TEST:18.838 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
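Note: the sequence above (create a namespace, put a service in it, delete and recreate the namespace, verify the service is gone) replays directly with stock kubectl; names are illustrative:

$ kubectl create namespace ns-demo
$ kubectl -n ns-demo create service clusterip svc-demo --tcp=80:80
$ kubectl delete namespace ns-demo   # blocks until namespace finalization completes
$ kubectl create namespace ns-demo
$ kubectl -n ns-demo get services    # empty: namespaced objects do not survive namespace deletion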
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:21:17.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8754
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8754
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8754
Jan 25 14:21:17.573: INFO: Found 0 stateful pods, waiting for 1
Jan 25 14:21:27.588: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 25 14:21:27.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 14:21:28.440: INFO: stderr: "I0125 14:21:27.840568    2547 log.go:172] (0xc0006a4a50) (0xc0005266e0) Create stream\nI0125 14:21:27.840961    2547 log.go:172] (0xc0006a4a50) (0xc0005266e0) Stream added, broadcasting: 1\nI0125 14:21:27.852677    2547 log.go:172] (0xc0006a4a50) Reply frame received for 1\nI0125 14:21:27.852782    2547 log.go:172] (0xc0006a4a50) (0xc000682280) Create stream\nI0125 14:21:27.852792    2547 log.go:172] (0xc0006a4a50) (0xc000682280) Stream added, broadcasting: 3\nI0125 14:21:27.858755    2547 log.go:172] (0xc0006a4a50) Reply frame received for 3\nI0125 14:21:27.858856    2547 log.go:172] (0xc0006a4a50) (0xc0007bc000) Create stream\nI0125 14:21:27.858872    2547 log.go:172] (0xc0006a4a50) (0xc0007bc000) Stream added, broadcasting: 5\nI0125 14:21:27.869826    2547 log.go:172] (0xc0006a4a50) Reply frame received for 5\nI0125 14:21:28.204177    2547 log.go:172] (0xc0006a4a50) Data frame received for 5\nI0125 14:21:28.204464    2547 log.go:172] (0xc0007bc000) (5) Data frame handling\nI0125 14:21:28.204495    2547 log.go:172] (0xc0007bc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 14:21:28.261858    2547 log.go:172] (0xc0006a4a50) Data frame received for 3\nI0125 14:21:28.261952    2547 log.go:172] (0xc000682280) (3) Data frame handling\nI0125 14:21:28.261963    2547 log.go:172] (0xc000682280) (3) Data frame sent\nI0125 14:21:28.431993    2547 log.go:172] (0xc0006a4a50) Data frame received for 1\nI0125 14:21:28.432144    2547 log.go:172] (0xc0006a4a50) (0xc0007bc000) Stream removed, broadcasting: 5\nI0125 14:21:28.432196    2547 log.go:172] (0xc0005266e0) (1) Data frame handling\nI0125 14:21:28.432209    2547 log.go:172] (0xc0005266e0) (1) Data frame sent\nI0125 14:21:28.432279    2547 log.go:172] (0xc0006a4a50) (0xc000682280) Stream removed, broadcasting: 3\nI0125 14:21:28.432312    2547 log.go:172] (0xc0006a4a50) (0xc0005266e0) Stream removed, broadcasting: 1\nI0125 14:21:28.432320    2547 log.go:172] (0xc0006a4a50) Go away received\nI0125 14:21:28.434010    2547 log.go:172] (0xc0006a4a50) (0xc0005266e0) Stream removed, broadcasting: 1\nI0125 14:21:28.434165    2547 log.go:172] (0xc0006a4a50) (0xc000682280) Stream removed, broadcasting: 3\nI0125 14:21:28.434185    2547 log.go:172] (0xc0006a4a50) (0xc0007bc000) Stream removed, broadcasting: 5\n"
Jan 25 14:21:28.441: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 14:21:28.441: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 14:21:28.452: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 14:21:38.469: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 14:21:38.469: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 14:21:38.509: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999438s
Jan 25 14:21:39.544: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.977908603s
Jan 25 14:21:40.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.943073813s
Jan 25 14:21:41.570: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.927985051s
Jan 25 14:21:42.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.916930792s
Jan 25 14:21:43.619: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.88152313s
Jan 25 14:21:44.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.868459134s
Jan 25 14:21:45.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.861559377s
Jan 25 14:21:46.658: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.836004668s
Jan 25 14:21:47.668: INFO: Verifying statefulset ss doesn't scale past 1 for another 829.531085ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8754
Jan 25 14:21:48.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 14:21:49.190: INFO: stderr: "I0125 14:21:48.880755    2563 log.go:172] (0xc0001166e0) (0xc000270820) Create stream\nI0125 14:21:48.881112    2563 log.go:172] (0xc0001166e0) (0xc000270820) Stream added, broadcasting: 1\nI0125 14:21:48.892721    2563 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0125 14:21:48.892820    2563 log.go:172] (0xc0001166e0) (0xc0006ec000) Create stream\nI0125 14:21:48.892850    2563 log.go:172] (0xc0001166e0) (0xc0006ec000) Stream added, broadcasting: 3\nI0125 14:21:48.897814    2563 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0125 14:21:48.897898    2563 log.go:172] (0xc0001166e0) (0xc0007c2000) Create stream\nI0125 14:21:48.897925    2563 log.go:172] (0xc0001166e0) (0xc0007c2000) Stream added, broadcasting: 5\nI0125 14:21:48.900196    2563 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0125 14:21:49.035958    2563 log.go:172] (0xc0001166e0) Data frame received for 3\nI0125 14:21:49.036138    2563 log.go:172] (0xc0006ec000) (3) Data frame handling\nI0125 14:21:49.036165    2563 log.go:172] (0xc0006ec000) (3) Data frame sent\nI0125 14:21:49.036237    2563 log.go:172] (0xc0001166e0) Data frame received for 5\nI0125 14:21:49.036313    2563 log.go:172] (0xc0007c2000) (5) Data frame handling\nI0125 14:21:49.036340    2563 log.go:172] (0xc0007c2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 14:21:49.177195    2563 log.go:172] (0xc0001166e0) Data frame received for 1\nI0125 14:21:49.177352    2563 log.go:172] (0xc000270820) (1) Data frame handling\nI0125 14:21:49.177384    2563 log.go:172] (0xc000270820) (1) Data frame sent\nI0125 14:21:49.177411    2563 log.go:172] (0xc0001166e0) (0xc000270820) Stream removed, broadcasting: 1\nI0125 14:21:49.178637    2563 log.go:172] (0xc0001166e0) (0xc0007c2000) Stream removed, broadcasting: 5\nI0125 14:21:49.178898    2563 log.go:172] (0xc0001166e0) (0xc0006ec000) Stream removed, broadcasting: 3\nI0125 14:21:49.179037    2563 log.go:172] (0xc0001166e0) (0xc000270820) Stream removed, broadcasting: 1\nI0125 14:21:49.179055    2563 log.go:172] (0xc0001166e0) (0xc0006ec000) Stream removed, broadcasting: 3\nI0125 14:21:49.179083    2563 log.go:172] (0xc0001166e0) (0xc0007c2000) Stream removed, broadcasting: 5\nI0125 14:21:49.179917    2563 log.go:172] (0xc0001166e0) Go away received\n"
Jan 25 14:21:49.190: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 14:21:49.190: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 14:21:49.198: INFO: Found 1 stateful pods, waiting for 3
Jan 25 14:21:59.207: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 14:21:59.207: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 14:21:59.207: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 14:22:09.208: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 14:22:09.208: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 14:22:09.208: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 25 14:22:09.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 14:22:09.731: INFO: stderr: "I0125 14:22:09.477100    2586 log.go:172] (0xc0008fe6e0) (0xc0005d4aa0) Create stream\nI0125 14:22:09.477291    2586 log.go:172] (0xc0008fe6e0) (0xc0005d4aa0) Stream added, broadcasting: 1\nI0125 14:22:09.487713    2586 log.go:172] (0xc0008fe6e0) Reply frame received for 1\nI0125 14:22:09.487827    2586 log.go:172] (0xc0008fe6e0) (0xc0009d4000) Create stream\nI0125 14:22:09.487854    2586 log.go:172] (0xc0008fe6e0) (0xc0009d4000) Stream added, broadcasting: 3\nI0125 14:22:09.490141    2586 log.go:172] (0xc0008fe6e0) Reply frame received for 3\nI0125 14:22:09.490310    2586 log.go:172] (0xc0008fe6e0) (0xc000a16000) Create stream\nI0125 14:22:09.490342    2586 log.go:172] (0xc0008fe6e0) (0xc000a16000) Stream added, broadcasting: 5\nI0125 14:22:09.493009    2586 log.go:172] (0xc0008fe6e0) Reply frame received for 5\nI0125 14:22:09.595161    2586 log.go:172] (0xc0008fe6e0) Data frame received for 3\nI0125 14:22:09.595329    2586 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0125 14:22:09.595359    2586 log.go:172] (0xc0009d4000) (3) Data frame sent\nI0125 14:22:09.595447    2586 log.go:172] (0xc0008fe6e0) Data frame received for 5\nI0125 14:22:09.595469    2586 log.go:172] (0xc000a16000) (5) Data frame handling\nI0125 14:22:09.595485    2586 log.go:172] (0xc000a16000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 14:22:09.720364    2586 log.go:172] (0xc0008fe6e0) Data frame received for 1\nI0125 14:22:09.720502    2586 log.go:172] (0xc0008fe6e0) (0xc0009d4000) Stream removed, broadcasting: 3\nI0125 14:22:09.720563    2586 log.go:172] (0xc0005d4aa0) (1) Data frame handling\nI0125 14:22:09.720592    2586 log.go:172] (0xc0005d4aa0) (1) Data frame sent\nI0125 14:22:09.720627    2586 log.go:172] (0xc0008fe6e0) (0xc000a16000) Stream removed, broadcasting: 5\nI0125 14:22:09.720683    2586 log.go:172] (0xc0008fe6e0) (0xc0005d4aa0) Stream removed, broadcasting: 1\nI0125 14:22:09.720735    2586 log.go:172] (0xc0008fe6e0) Go away received\nI0125 14:22:09.721557    2586 log.go:172] (0xc0008fe6e0) (0xc0005d4aa0) Stream removed, broadcasting: 1\nI0125 14:22:09.721574    2586 log.go:172] (0xc0008fe6e0) (0xc0009d4000) Stream removed, broadcasting: 3\nI0125 14:22:09.721585    2586 log.go:172] (0xc0008fe6e0) (0xc000a16000) Stream removed, broadcasting: 5\n"
Jan 25 14:22:09.731: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 14:22:09.731: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 14:22:09.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 14:22:10.162: INFO: stderr: "I0125 14:22:09.997893    2608 log.go:172] (0xc000934420) (0xc0004d06e0) Create stream\nI0125 14:22:09.998235    2608 log.go:172] (0xc000934420) (0xc0004d06e0) Stream added, broadcasting: 1\nI0125 14:22:10.002879    2608 log.go:172] (0xc000934420) Reply frame received for 1\nI0125 14:22:10.002960    2608 log.go:172] (0xc000934420) (0xc000286140) Create stream\nI0125 14:22:10.002970    2608 log.go:172] (0xc000934420) (0xc000286140) Stream added, broadcasting: 3\nI0125 14:22:10.003813    2608 log.go:172] (0xc000934420) Reply frame received for 3\nI0125 14:22:10.003830    2608 log.go:172] (0xc000934420) (0xc0002861e0) Create stream\nI0125 14:22:10.003838    2608 log.go:172] (0xc000934420) (0xc0002861e0) Stream added, broadcasting: 5\nI0125 14:22:10.004684    2608 log.go:172] (0xc000934420) Reply frame received for 5\nI0125 14:22:10.086575    2608 log.go:172] (0xc000934420) Data frame received for 5\nI0125 14:22:10.086654    2608 log.go:172] (0xc0002861e0) (5) Data frame handling\nI0125 14:22:10.086668    2608 log.go:172] (0xc0002861e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 14:22:10.092121    2608 log.go:172] (0xc000934420) Data frame received for 3\nI0125 14:22:10.092136    2608 log.go:172] (0xc000286140) (3) Data frame handling\nI0125 14:22:10.092144    2608 log.go:172] (0xc000286140) (3) Data frame sent\nI0125 14:22:10.152293    2608 log.go:172] (0xc000934420) Data frame received for 1\nI0125 14:22:10.152376    2608 log.go:172] (0xc0004d06e0) (1) Data frame handling\nI0125 14:22:10.152394    2608 log.go:172] (0xc0004d06e0) (1) Data frame sent\nI0125 14:22:10.152690    2608 log.go:172] (0xc000934420) (0xc0004d06e0) Stream removed, broadcasting: 1\nI0125 14:22:10.153702    2608 log.go:172] (0xc000934420) (0xc000286140) Stream removed, broadcasting: 3\nI0125 14:22:10.153790    2608 log.go:172] (0xc000934420) (0xc0002861e0) Stream removed, broadcasting: 5\nI0125 14:22:10.153873    2608 log.go:172] (0xc000934420) Go away received\nI0125 14:22:10.154366    2608 log.go:172] (0xc000934420) (0xc0004d06e0) Stream removed, broadcasting: 1\nI0125 14:22:10.154401    2608 log.go:172] (0xc000934420) (0xc000286140) Stream removed, broadcasting: 3\nI0125 14:22:10.154412    2608 log.go:172] (0xc000934420) (0xc0002861e0) Stream removed, broadcasting: 5\n"
Jan 25 14:22:10.162: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 14:22:10.162: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 14:22:10.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 14:22:10.749: INFO: stderr: "I0125 14:22:10.385060    2628 log.go:172] (0xc000117130) (0xc000696be0) Create stream\nI0125 14:22:10.385572    2628 log.go:172] (0xc000117130) (0xc000696be0) Stream added, broadcasting: 1\nI0125 14:22:10.398071    2628 log.go:172] (0xc000117130) Reply frame received for 1\nI0125 14:22:10.398164    2628 log.go:172] (0xc000117130) (0xc000990000) Create stream\nI0125 14:22:10.398185    2628 log.go:172] (0xc000117130) (0xc000990000) Stream added, broadcasting: 3\nI0125 14:22:10.400172    2628 log.go:172] (0xc000117130) Reply frame received for 3\nI0125 14:22:10.400255    2628 log.go:172] (0xc000117130) (0xc000722000) Create stream\nI0125 14:22:10.400294    2628 log.go:172] (0xc000117130) (0xc000722000) Stream added, broadcasting: 5\nI0125 14:22:10.402060    2628 log.go:172] (0xc000117130) Reply frame received for 5\nI0125 14:22:10.560666    2628 log.go:172] (0xc000117130) Data frame received for 5\nI0125 14:22:10.561213    2628 log.go:172] (0xc000722000) (5) Data frame handling\nI0125 14:22:10.561270    2628 log.go:172] (0xc000722000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0125 14:22:10.599272    2628 log.go:172] (0xc000117130) Data frame received for 3\nI0125 14:22:10.599342    2628 log.go:172] (0xc000990000) (3) Data frame handling\nI0125 14:22:10.599372    2628 log.go:172] (0xc000990000) (3) Data frame sent\nI0125 14:22:10.738651    2628 log.go:172] (0xc000117130) Data frame received for 1\nI0125 14:22:10.738833    2628 log.go:172] (0xc000696be0) (1) Data frame handling\nI0125 14:22:10.738876    2628 log.go:172] (0xc000696be0) (1) Data frame sent\nI0125 14:22:10.738916    2628 log.go:172] (0xc000117130) (0xc000696be0) Stream removed, broadcasting: 1\nI0125 14:22:10.739675    2628 log.go:172] (0xc000117130) (0xc000990000) Stream removed, broadcasting: 3\nI0125 14:22:10.739781    2628 log.go:172] (0xc000117130) (0xc000722000) Stream removed, broadcasting: 5\nI0125 14:22:10.739915    2628 log.go:172] (0xc000117130) Go away received\nI0125 14:22:10.740299    2628 log.go:172] (0xc000117130) (0xc000696be0) Stream removed, broadcasting: 1\nI0125 14:22:10.740315    2628 log.go:172] (0xc000117130) (0xc000990000) Stream removed, broadcasting: 3\nI0125 14:22:10.740324    2628 log.go:172] (0xc000117130) (0xc000722000) Stream removed, broadcasting: 5\n"
Jan 25 14:22:10.749: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 14:22:10.749: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 14:22:10.749: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 14:22:10.754: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 14:22:20.771: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 14:22:20.771: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 14:22:20.771: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 14:22:20.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999927s
Jan 25 14:22:21.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.937625154s
Jan 25 14:22:22.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.874157016s
Jan 25 14:22:23.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.851796786s
Jan 25 14:22:24.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.832835281s
Jan 25 14:22:25.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.820285738s
Jan 25 14:22:27.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.795852679s
Jan 25 14:22:28.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.779137892s
Jan 25 14:22:29.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.770125788s
Jan 25 14:22:30.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 745.512618ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-8754
Jan 25 14:22:31.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 14:22:31.707: INFO: stderr: "I0125 14:22:31.410969    2648 log.go:172] (0xc000116dc0) (0xc0005b6a00) Create stream\nI0125 14:22:31.411357    2648 log.go:172] (0xc000116dc0) (0xc0005b6a00) Stream added, broadcasting: 1\nI0125 14:22:31.421206    2648 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0125 14:22:31.421290    2648 log.go:172] (0xc000116dc0) (0xc000916000) Create stream\nI0125 14:22:31.421304    2648 log.go:172] (0xc000116dc0) (0xc000916000) Stream added, broadcasting: 3\nI0125 14:22:31.423775    2648 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0125 14:22:31.423818    2648 log.go:172] (0xc000116dc0) (0xc000832000) Create stream\nI0125 14:22:31.423832    2648 log.go:172] (0xc000116dc0) (0xc000832000) Stream added, broadcasting: 5\nI0125 14:22:31.425645    2648 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0125 14:22:31.549218    2648 log.go:172] (0xc000116dc0) Data frame received for 5\nI0125 14:22:31.549355    2648 log.go:172] (0xc000832000) (5) Data frame handling\nI0125 14:22:31.549378    2648 log.go:172] (0xc000832000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 14:22:31.551916    2648 log.go:172] (0xc000116dc0) Data frame received for 3\nI0125 14:22:31.551932    2648 log.go:172] (0xc000916000) (3) Data frame handling\nI0125 14:22:31.551946    2648 log.go:172] (0xc000916000) (3) Data frame sent\nI0125 14:22:31.698299    2648 log.go:172] (0xc000116dc0) (0xc000916000) Stream removed, broadcasting: 3\nI0125 14:22:31.698804    2648 log.go:172] (0xc000116dc0) Data frame received for 1\nI0125 14:22:31.698931    2648 log.go:172] (0xc0005b6a00) (1) Data frame handling\nI0125 14:22:31.698953    2648 log.go:172] (0xc0005b6a00) (1) Data frame sent\nI0125 14:22:31.698980    2648 log.go:172] (0xc000116dc0) (0xc0005b6a00) Stream removed, broadcasting: 1\nI0125 14:22:31.699144    2648 log.go:172] (0xc000116dc0) (0xc000832000) Stream removed, broadcasting: 5\nI0125 14:22:31.699196    2648 log.go:172] (0xc000116dc0) Go away received\nI0125 14:22:31.699605    2648 log.go:172] (0xc000116dc0) (0xc0005b6a00) Stream removed, broadcasting: 1\nI0125 14:22:31.699617    2648 log.go:172] (0xc000116dc0) (0xc000916000) Stream removed, broadcasting: 3\nI0125 14:22:31.699623    2648 log.go:172] (0xc000116dc0) (0xc000832000) Stream removed, broadcasting: 5\n"
Jan 25 14:22:31.707: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 14:22:31.707: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 14:22:31.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 14:22:32.201: INFO: stderr: "I0125 14:22:31.881166    2666 log.go:172] (0xc0005e8420) (0xc00072c6e0) Create stream\nI0125 14:22:31.881634    2666 log.go:172] (0xc0005e8420) (0xc00072c6e0) Stream added, broadcasting: 1\nI0125 14:22:31.891648    2666 log.go:172] (0xc0005e8420) Reply frame received for 1\nI0125 14:22:31.891999    2666 log.go:172] (0xc0005e8420) (0xc0006da000) Create stream\nI0125 14:22:31.892045    2666 log.go:172] (0xc0005e8420) (0xc0006da000) Stream added, broadcasting: 3\nI0125 14:22:31.895218    2666 log.go:172] (0xc0005e8420) Reply frame received for 3\nI0125 14:22:31.895264    2666 log.go:172] (0xc0005e8420) (0xc000778140) Create stream\nI0125 14:22:31.895278    2666 log.go:172] (0xc0005e8420) (0xc000778140) Stream added, broadcasting: 5\nI0125 14:22:31.896434    2666 log.go:172] (0xc0005e8420) Reply frame received for 5\nI0125 14:22:32.069338    2666 log.go:172] (0xc0005e8420) Data frame received for 3\nI0125 14:22:32.069586    2666 log.go:172] (0xc0006da000) (3) Data frame handling\nI0125 14:22:32.069676    2666 log.go:172] (0xc0005e8420) Data frame received for 5\nI0125 14:22:32.069708    2666 log.go:172] (0xc000778140) (5) Data frame handling\nI0125 14:22:32.069765    2666 log.go:172] (0xc000778140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 14:22:32.069886    2666 log.go:172] (0xc0006da000) (3) Data frame sent\nI0125 14:22:32.192755    2666 log.go:172] (0xc0005e8420) (0xc000778140) Stream removed, broadcasting: 5\nI0125 14:22:32.192996    2666 log.go:172] (0xc0005e8420) Data frame received for 1\nI0125 14:22:32.193019    2666 log.go:172] (0xc0005e8420) (0xc0006da000) Stream removed, broadcasting: 3\nI0125 14:22:32.193063    2666 log.go:172] (0xc00072c6e0) (1) Data frame handling\nI0125 14:22:32.193079    2666 log.go:172] (0xc00072c6e0) (1) Data frame sent\nI0125 14:22:32.193088    2666 log.go:172] (0xc0005e8420) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0125 14:22:32.193099    2666 log.go:172] (0xc0005e8420) Go away received\nI0125 14:22:32.194034    2666 log.go:172] (0xc0005e8420) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0125 14:22:32.194048    2666 log.go:172] (0xc0005e8420) (0xc0006da000) Stream removed, broadcasting: 3\nI0125 14:22:32.194052    2666 log.go:172] (0xc0005e8420) (0xc000778140) Stream removed, broadcasting: 5\n"
Jan 25 14:22:32.201: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 14:22:32.201: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 14:22:32.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8754 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 14:22:33.098: INFO: stderr: "I0125 14:22:32.406271    2686 log.go:172] (0xc000a340b0) (0xc0007d6140) Create stream\nI0125 14:22:32.406688    2686 log.go:172] (0xc000a340b0) (0xc0007d6140) Stream added, broadcasting: 1\nI0125 14:22:32.413844    2686 log.go:172] (0xc000a340b0) Reply frame received for 1\nI0125 14:22:32.413897    2686 log.go:172] (0xc000a340b0) (0xc0005e0320) Create stream\nI0125 14:22:32.413905    2686 log.go:172] (0xc000a340b0) (0xc0005e0320) Stream added, broadcasting: 3\nI0125 14:22:32.415913    2686 log.go:172] (0xc000a340b0) Reply frame received for 3\nI0125 14:22:32.416078    2686 log.go:172] (0xc000a340b0) (0xc0002e0000) Create stream\nI0125 14:22:32.416100    2686 log.go:172] (0xc000a340b0) (0xc0002e0000) Stream added, broadcasting: 5\nI0125 14:22:32.418857    2686 log.go:172] (0xc000a340b0) Reply frame received for 5\nI0125 14:22:32.799484    2686 log.go:172] (0xc000a340b0) Data frame received for 5\nI0125 14:22:32.799910    2686 log.go:172] (0xc0002e0000) (5) Data frame handling\nI0125 14:22:32.799972    2686 log.go:172] (0xc0002e0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0125 14:22:32.800109    2686 log.go:172] (0xc000a340b0) Data frame received for 3\nI0125 14:22:32.800145    2686 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0125 14:22:32.800188    2686 log.go:172] (0xc0005e0320) (3) Data frame sent\nI0125 14:22:33.084307    2686 log.go:172] (0xc000a340b0) Data frame received for 1\nI0125 14:22:33.084648    2686 log.go:172] (0xc000a340b0) (0xc0005e0320) Stream removed, broadcasting: 3\nI0125 14:22:33.084821    2686 log.go:172] (0xc0007d6140) (1) Data frame handling\nI0125 14:22:33.084891    2686 log.go:172] (0xc0007d6140) (1) Data frame sent\nI0125 14:22:33.084934    2686 log.go:172] (0xc000a340b0) (0xc0007d6140) Stream removed, broadcasting: 1\nI0125 14:22:33.086926    2686 log.go:172] (0xc000a340b0) (0xc0002e0000) Stream removed, broadcasting: 5\nI0125 14:22:33.087053    2686 log.go:172] (0xc000a340b0) Go away received\nI0125 14:22:33.087289    2686 log.go:172] (0xc000a340b0) (0xc0007d6140) Stream removed, broadcasting: 1\nI0125 14:22:33.087312    2686 log.go:172] (0xc000a340b0) (0xc0005e0320) Stream removed, broadcasting: 3\nI0125 14:22:33.087325    2686 log.go:172] (0xc000a340b0) (0xc0002e0000) Stream removed, broadcasting: 5\n"
Jan 25 14:22:33.099: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 14:22:33.099: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 14:22:33.099: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 25 14:23:13.232: INFO: Deleting all statefulset in ns statefulset-8754
Jan 25 14:23:13.240: INFO: Scaling statefulset ss to 0
Jan 25 14:23:13.275: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 14:23:13.280: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:23:13.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8754" for this suite.
Jan 25 14:23:19.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:23:19.494: INFO: namespace statefulset-8754 deletion completed in 6.14084089s

• [SLOW TEST:122.103 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
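Note: the index.html shuffling above is how the suite toggles readiness: the stateful set's nginx pods appear to use an HTTP readiness probe against index.html, so moving the file out of the webroot flips a pod to Ready=false and in-flight scaling halts. The ordering guarantees themselves can be observed with plain kubectl (StatefulSet name and namespace illustrative):

$ kubectl scale statefulset ss --replicas=3                        # pods start strictly in order: ss-0, ss-1, ss-2
$ kubectl exec ss-0 -- mv /usr/share/nginx/html/index.html /tmp/   # readiness probe fails; ss-0 goes Ready=false
$ kubectl get statefulset ss                                       # READY stays short of 3 and further scaling waits
$ kubectl exec ss-0 -- mv /tmp/index.html /usr/share/nginx/html/   # restore readiness
$ kubectl scale statefulset ss --replicas=0                        # teardown runs in reverse order: ss-2, ss-1, ss-0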
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:23:19.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 25 14:23:19.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 25 14:23:20.111: INFO: stderr: ""
Jan 25 14:23:20.111: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:23:20.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9814" for this suite.
Jan 25 14:23:26.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:23:26.295: INFO: namespace kubectl-9814 deletion completed in 6.178198961s

• [SLOW TEST:6.799 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
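Note: the assertion reduces to "the core v1 groupVersion appears in the list". By hand:

$ kubectl api-versions | grep -x v1   # exit status 0 only if v1 is advertised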
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:23:26.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-1b34c5da-35b9-4c68-962b-3bcdeee05fac
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1b34c5da-35b9-4c68-962b-3bcdeee05fac
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:24:46.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8567" for this suite.
Jan 25 14:25:08.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:25:08.163: INFO: namespace configmap-8567 deletion completed in 22.112276942s

• [SLOW TEST:101.866 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
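Note: the long wait in this spec comes from the kubelet's refresh cycle for configMap volumes (roughly a sync period plus the cache TTL on v1.15). A sketch of the same update-and-observe loop, with illustrative names:

$ kubectl create configmap cm-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap: {name: cm-demo}
EOF
$ kubectl create configmap cm-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
$ kubectl logs -f cm-watch   # the printed value eventually flips from value-1 to value-2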
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:25:08.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 25 14:25:17.588: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:25:18.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5504" for this suite.
Jan 25 14:25:42.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:25:42.800: INFO: namespace replicaset-5504 deletion completed in 24.157748479s

• [SLOW TEST:34.637 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
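Note: adoption and release both hinge on label-selector matching plus ownerReferences, as the steps above describe. A replayable sketch (names illustrative):

$ kubectl run pod-adopt --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=name=pod-adopt
$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-demo
spec:
  replicas: 1
  selector:
    matchLabels: {name: pod-adopt}
  template:
    metadata:
      labels: {name: pod-adopt}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl get pod pod-adopt -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicaSet: the bare pod was adopted
$ kubectl label pod pod-adopt --overwrite name=released
$ kubectl get pod pod-adopt -o jsonpath='{.metadata.ownerReferences}'           # empty: the relabelled pod was released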
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:25:42.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:25:43.046: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042" in namespace "projected-9785" to be "success or failure"
Jan 25 14:25:43.054: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042": Phase="Pending", Reason="", readiness=false. Elapsed: 7.896798ms
Jan 25 14:25:45.066: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020276607s
Jan 25 14:25:47.083: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036888822s
Jan 25 14:25:49.097: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050359965s
Jan 25 14:25:51.114: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067788217s
STEP: Saw pod success
Jan 25 14:25:51.114: INFO: Pod "downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042" satisfied condition "success or failure"
Jan 25 14:25:51.120: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042 container client-container: 
STEP: delete the pod
Jan 25 14:25:51.219: INFO: Waiting for pod downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042 to disappear
Jan 25 14:25:51.226: INFO: Pod downwardapi-volume-c69e3583-c15e-4bdf-8117-54958d324042 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:25:51.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9785" for this suite.
Jan 25 14:25:57.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:25:57.386: INFO: namespace projected-9785 deletion completed in 6.154638775s

• [SLOW TEST:14.585 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
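Note: the "set mode on item file" variant is the same downwardAPI projection wrapped in a projected volume, with an explicit per-item file mode. A sketch with illustrative names (the effective mode can be further adjusted if a pod-level fsGroup applies):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: proj-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400
            fieldRef: {fieldPath: metadata.name}
EOF
$ kubectl logs proj-mode-demo   # prints 400: the per-item mode was applied (stat -L dereferences the ..data symlink)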
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:25:57.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 14:25:57.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9083'
Jan 25 14:25:59.596: INFO: stderr: ""
Jan 25 14:25:59.596: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 25 14:26:09.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9083 -o json'
Jan 25 14:26:09.803: INFO: stderr: ""
Jan 25 14:26:09.803: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-25T14:25:59Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9083\",\n        \"resourceVersion\": \"21820211\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9083/pods/e2e-test-nginx-pod\",\n        \"uid\": \"6d820f7d-43de-4360-8179-9524817e4409\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ft2g9\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ft2g9\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ft2g9\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T14:25:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T14:26:06Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T14:26:06Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T14:25:59Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://4ec4a0d826dee2a2cf6ded4a02c136877e8f4ffe3bbba129661202b9bd7e27f7\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-25T14:26:06Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-25T14:25:59Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 25 14:26:09.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9083'
Jan 25 14:26:10.225: INFO: stderr: ""
Jan 25 14:26:10.225: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 25 14:26:10.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9083'
Jan 25 14:26:19.080: INFO: stderr: ""
Jan 25 14:26:19.080: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:26:19.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9083" for this suite.
Jan 25 14:26:25.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:26:25.282: INFO: namespace kubectl-9083 deletion completed in 6.190935961s

• [SLOW TEST:27.895 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
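The replace flow above can be reproduced by hand. A sketch with an illustrative pod name; the sed edit is just one way to swap the image field before feeding the full object back to kubectl replace:

    kubectl run demo-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine --labels=run=demo-pod
    kubectl get pod demo-pod -o json \
      | sed 's|nginx:1.14-alpine|busybox:1.29|' \
      | kubectl replace -f -
    # verify the new image took effect
    kubectl get pod demo-pod -o jsonpath='{.spec.containers[0].image}'

kubectl replace needs the complete object, which is why the test round-trips through get -o json rather than patching a single field.
------------------------------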
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:26:25.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 25 14:26:25.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1111,SelfLink:/api/v1/namespaces/watch-1111/configmaps/e2e-watch-test-resource-version,UID:4a12c856-7eaa-4d33-8dce-7707bf05d011,ResourceVersion:21820265,Generation:0,CreationTimestamp:2020-01-25 14:26:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 14:26:25.404: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1111,SelfLink:/api/v1/namespaces/watch-1111/configmaps/e2e-watch-test-resource-version,UID:4a12c856-7eaa-4d33-8dce-7707bf05d011,ResourceVersion:21820266,Generation:0,CreationTimestamp:2020-01-25 14:26:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:26:25.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1111" for this suite.
Jan 25 14:26:31.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:26:31.566: INFO: namespace watch-1111 deletion completed in 6.158201843s

• [SLOW TEST:6.284 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
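Starting a watch at a specific resourceVersion, which the test does through client-go, can also be done against the raw API. A sketch via kubectl proxy, with illustrative names:

    kubectl proxy --port=8001 &
    RV=$(kubectl get configmap demo-cm -o jsonpath='{.metadata.resourceVersion}')
    # stream every change to the configmap that happens after $RV
    curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=demo-cm"

The stream replays the events newer than that version, which is exactly the MODIFIED/DELETED pair of notifications logged above.
------------------------------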
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:26:31.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:26:31.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42" in namespace "projected-3073" to be "success or failure"
Jan 25 14:26:31.695: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Pending", Reason="", readiness=false. Elapsed: 14.158982ms
Jan 25 14:26:33.706: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025409766s
Jan 25 14:26:35.720: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038870079s
Jan 25 14:26:37.732: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051067694s
Jan 25 14:26:39.742: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060749706s
Jan 25 14:26:41.750: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068854229s
STEP: Saw pod success
Jan 25 14:26:41.750: INFO: Pod "downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42" satisfied condition "success or failure"
Jan 25 14:26:41.755: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42 container client-container: 
STEP: delete the pod
Jan 25 14:26:42.000: INFO: Waiting for pod downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42 to disappear
Jan 25 14:26:42.041: INFO: Pod downwardapi-volume-969f61ca-a0a9-4066-8919-e7a24d709f42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:26:42.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3073" for this suite.
Jan 25 14:26:48.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:26:48.225: INFO: namespace projected-3073 deletion completed in 6.177193086s

• [SLOW TEST:16.658 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:26:48.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 14:26:48.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7824'
Jan 25 14:26:48.492: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 14:26:48.492: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 25 14:26:48.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7824'
Jan 25 14:26:48.652: INFO: stderr: ""
Jan 25 14:26:48.652: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:26:48.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7824" for this suite.
Jan 25 14:27:10.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:27:10.832: INFO: namespace kubectl-7824 deletion completed in 22.170000576s

• [SLOW TEST:22.607 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
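The stderr above flags --generator=job/v1 as deprecated. The non-deprecated equivalent is an explicit Job manifest (or kubectl create job); a sketch with the OnFailure restart policy this test asks for, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: e2e-demo-job
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
    EOF
------------------------------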
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:27:10.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:27:11.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce" in namespace "downward-api-8692" to be "success or failure"
Jan 25 14:27:11.036: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097321ms
Jan 25 14:27:13.448: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.420933013s
Jan 25 14:27:15.459: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43113771s
Jan 25 14:27:17.468: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440145909s
Jan 25 14:27:19.528: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.500686038s
STEP: Saw pod success
Jan 25 14:27:19.528: INFO: Pod "downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce" satisfied condition "success or failure"
Jan 25 14:27:19.541: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce container client-container: 
STEP: delete the pod
Jan 25 14:27:19.616: INFO: Waiting for pod downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce to disappear
Jan 25 14:27:19.765: INFO: Pod downwardapi-volume-ae234235-fc4d-484b-9406-4008b1f276ce no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:27:19.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8692" for this suite.
Jan 25 14:27:25.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:27:25.990: INFO: namespace downward-api-8692 deletion completed in 6.197180123s

• [SLOW TEST:15.158 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
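What this test checks: when a container sets no CPU limit, a downward API resourceFieldRef for limits.cpu reports the node's allocatable CPU instead. A sketch, with illustrative names and divisor:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpulimit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        # no resources.limits.cpu here, so node allocatable is what gets written
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # report the value in millicores
    EOF
------------------------------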
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:27:25.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 25 14:27:36.198: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-692bedd7-bf1f-4e7c-831e-e55abf04bac9,GenerateName:,Namespace:events-416,SelfLink:/api/v1/namespaces/events-416/pods/send-events-692bedd7-bf1f-4e7c-831e-e55abf04bac9,UID:c8b74c0a-477b-4ca6-836e-cd42313c01ca,ResourceVersion:21820442,Generation:0,CreationTimestamp:2020-01-25 14:27:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 146533255,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-njrtw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-njrtw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-njrtw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023aa4d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023aa4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:27:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:27:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:27:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 14:27:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-25 14:27:26 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-25 14:27:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://135e510ca3c31432a0407a1f0ca36d8dc0aa428a9113cdb32f83ab0459a5b454}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 25 14:27:38.206: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 25 14:27:40.216: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:27:40.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-416" for this suite.
Jan 25 14:28:32.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:28:32.383: INFO: namespace events-416 deletion completed in 52.142802935s

• [SLOW TEST:66.392 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
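The scheduler and kubelet events the test waits for can be listed directly with a field selector; pod name illustrative:

    kubectl get events -n default \
      --field-selector involvedObject.name=demo-pod
    # typical reasons: Scheduled (default-scheduler), Pulled/Created/Started (kubelet)
------------------------------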
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:28:32.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:28:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9092" for this suite.
Jan 25 14:28:48.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:28:49.007: INFO: namespace emptydir-wrapper-9092 deletion completed in 6.27205933s

• [SLOW TEST:16.623 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:28:49.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2017eef1-e0ce-44f7-befe-4987a5e44cf8
STEP: Creating a pod to test consume configMaps
Jan 25 14:28:49.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a" in namespace "configmap-9148" to be "success or failure"
Jan 25 14:28:49.159: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.161156ms
Jan 25 14:28:51.168: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020402364s
Jan 25 14:28:53.176: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028451063s
Jan 25 14:28:55.190: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042371509s
Jan 25 14:28:57.403: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.255366233s
STEP: Saw pod success
Jan 25 14:28:57.403: INFO: Pod "pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a" satisfied condition "success or failure"
Jan 25 14:28:57.495: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a container configmap-volume-test: 
STEP: delete the pod
Jan 25 14:28:57.587: INFO: Waiting for pod pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a to disappear
Jan 25 14:28:57.707: INFO: Pod pod-configmaps-b3ea7179-a467-45ae-b9d8-c3b06d951e2a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:28:57.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9148" for this suite.
Jan 25 14:29:03.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:29:03.988: INFO: namespace configmap-9148 deletion completed in 6.273969824s

• [SLOW TEST:14.981 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
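A sketch of the shape this test exercises: one configMap consumed through two separate volumes in the same pod. All names are illustrative.

    kubectl create configmap shared-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-twice-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
        volumeMounts:
        - name: cfg-a
          mountPath: /etc/cfg-a
        - name: cfg-b
          mountPath: /etc/cfg-b
      volumes:
      - name: cfg-a
        configMap:
          name: shared-config
      - name: cfg-b
        configMap:
          name: shared-config
    EOF
------------------------------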
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:29:03.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 25 14:29:04.194: INFO: Waiting up to 5m0s for pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4" in namespace "var-expansion-8423" to be "success or failure"
Jan 25 14:29:04.204: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.785653ms
Jan 25 14:29:06.220: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025813171s
Jan 25 14:29:08.236: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041557187s
Jan 25 14:29:10.248: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053645084s
Jan 25 14:29:12.255: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060994439s
Jan 25 14:29:14.264: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069301651s
STEP: Saw pod success
Jan 25 14:29:14.264: INFO: Pod "var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4" satisfied condition "success or failure"
Jan 25 14:29:14.268: INFO: Trying to get logs from node iruya-node pod var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4 container dapi-container: 
STEP: delete the pod
Jan 25 14:29:14.366: INFO: Waiting for pod var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4 to disappear
Jan 25 14:29:14.382: INFO: Pod var-expansion-9adaf8ee-87ac-4481-a074-e6d75c5be3e4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:29:14.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8423" for this suite.
Jan 25 14:29:20.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:29:20.564: INFO: namespace var-expansion-8423 deletion completed in 6.17465746s

• [SLOW TEST:16.574 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
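The substitution under test is kubelet-side $(VAR) expansion in the container command, not shell expansion. A sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        env:
        - name: MESSAGE
          value: "test-value"
        # $(MESSAGE) is resolved by the kubelet before the process starts
        command: ["/bin/echo", "$(MESSAGE)"]
    EOF
------------------------------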
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:29:20.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-gjr5
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 14:29:20.663: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gjr5" in namespace "subpath-6129" to be "success or failure"
Jan 25 14:29:20.665: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.798272ms
Jan 25 14:29:22.681: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018348255s
Jan 25 14:29:24.691: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02801338s
Jan 25 14:29:26.704: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04176631s
Jan 25 14:29:28.726: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 8.063500273s
Jan 25 14:29:30.735: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 10.071819506s
Jan 25 14:29:32.762: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 12.099107048s
Jan 25 14:29:34.779: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 14.116151716s
Jan 25 14:29:36.788: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 16.125328405s
Jan 25 14:29:38.798: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 18.134899167s
Jan 25 14:29:40.807: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 20.143867222s
Jan 25 14:29:42.832: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 22.169632534s
Jan 25 14:29:44.840: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 24.177684272s
Jan 25 14:29:46.849: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 26.186187679s
Jan 25 14:29:48.860: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Running", Reason="", readiness=true. Elapsed: 28.196891457s
Jan 25 14:29:50.872: INFO: Pod "pod-subpath-test-downwardapi-gjr5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.209510376s
STEP: Saw pod success
Jan 25 14:29:50.872: INFO: Pod "pod-subpath-test-downwardapi-gjr5" satisfied condition "success or failure"
Jan 25 14:29:50.880: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-gjr5 container test-container-subpath-downwardapi-gjr5: 
STEP: delete the pod
Jan 25 14:29:50.963: INFO: Waiting for pod pod-subpath-test-downwardapi-gjr5 to disappear
Jan 25 14:29:50.968: INFO: Pod pod-subpath-test-downwardapi-gjr5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gjr5
Jan 25 14:29:50.968: INFO: Deleting pod "pod-subpath-test-downwardapi-gjr5" in namespace "subpath-6129"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:29:50.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6129" for this suite.
Jan 25 14:29:57.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:29:57.192: INFO: namespace subpath-6129 deletion completed in 6.215618174s

• [SLOW TEST:36.628 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
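A sketch of an atomic-writer volume consumed through a subPath mount, as this test does with the downward API; names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podname
          subPath: podname   # mount a single file out of the volume
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF

Worth remembering: a subPath mount is a bind mount of one path, so later atomic-writer updates to the volume are not reflected through it.
------------------------------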
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:29:57.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:30:29.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6792" for this suite.
Jan 25 14:30:35.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:30:35.680: INFO: namespace namespaces-6792 deletion completed in 6.169274138s
STEP: Destroying namespace "nsdeletetest-8517" for this suite.
Jan 25 14:30:35.682: INFO: Namespace nsdeletetest-8517 was already deleted
STEP: Destroying namespace "nsdeletetest-5650" for this suite.
Jan 25 14:30:41.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:30:41.839: INFO: namespace nsdeletetest-5650 deletion completed in 6.156345492s

• [SLOW TEST:44.646 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
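The behavior under test, reproduced by hand; namespace and pod names illustrative:

    kubectl create namespace nsdelete-demo
    kubectl run demo-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine -n nsdelete-demo
    kubectl delete namespace nsdelete-demo   # waits for finalization by default
    kubectl get pods -n nsdelete-demo        # empty: the pods went with the namespace
------------------------------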
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:30:41.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-7914a8e7-611d-4953-a562-0cc2a33ac697 in namespace container-probe-759
Jan 25 14:30:49.982: INFO: Started pod test-webserver-7914a8e7-611d-4953-a562-0cc2a33ac697 in namespace container-probe-759
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 14:30:49.991: INFO: Initial restart count of pod test-webserver-7914a8e7-611d-4953-a562-0cc2a33ac697 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:34:51.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-759" for this suite.
Jan 25 14:34:57.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:34:58.114: INFO: namespace container-probe-759 deletion completed in 6.228151488s

• [SLOW TEST:256.274 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
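A sketch of a liveness probe that keeps passing, so restartCount stays at 0; path, port, image, and timings illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: test-webserver
        image: docker.io/library/nginx:1.14-alpine
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 5
          failureThreshold: 3
    EOF
    # after letting it run a while, RESTARTS should still read 0
    kubectl get pod liveness-demo
------------------------------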
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:34:58.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 25 14:34:58.243: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix367558743/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:34:58.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1275" for this suite.
Jan 25 14:35:04.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:35:04.570: INFO: namespace kubectl-1275 deletion completed in 6.230564119s

• [SLOW TEST:6.457 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
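The proxy started above listens on a unix socket instead of TCP; it can be queried with curl's --unix-socket option (curl 7.40+). Socket path illustrative:

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    # returns the APIVersions object, the same /api/ check the test performs
------------------------------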
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:35:04.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:35:04.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f" in namespace "downward-api-8990" to be "success or failure"
Jan 25 14:35:04.700: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.588209ms
Jan 25 14:35:06.719: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042565961s
Jan 25 14:35:08.727: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050556964s
Jan 25 14:35:11.153: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476412724s
Jan 25 14:35:13.163: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486005092s
Jan 25 14:35:15.173: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495926084s
STEP: Saw pod success
Jan 25 14:35:15.173: INFO: Pod "downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f" satisfied condition "success or failure"
Jan 25 14:35:15.177: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f container client-container: 
STEP: delete the pod
Jan 25 14:35:15.361: INFO: Waiting for pod downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f to disappear
Jan 25 14:35:15.369: INFO: Pod downwardapi-volume-267d329a-ec9d-4cfb-937b-6eb0e5d5751f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:35:15.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8990" for this suite.
Jan 25 14:35:21.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:35:21.553: INFO: namespace downward-api-8990 deletion completed in 6.174734952s

• [SLOW TEST:16.982 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:35:21.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:35:21.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6070" for this suite.
Jan 25 14:35:27.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:35:28.138: INFO: namespace services-6070 deletion completed in 6.297492072s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.585 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
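The check itself is small: the kubernetes service in the default namespace must expose the API server on a secure port:

    kubectl get service kubernetes -n default
    # typically shows a ClusterIP service with PORT(S) 443/TCP
------------------------------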
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:35:28.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 25 14:35:28.222: INFO: Waiting up to 5m0s for pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004" in namespace "containers-4511" to be "success or failure"
Jan 25 14:35:28.230: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.847491ms
Jan 25 14:35:30.305: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08229806s
Jan 25 14:35:32.333: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110199633s
Jan 25 14:35:34.344: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121510543s
Jan 25 14:35:36.394: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171771207s
STEP: Saw pod success
Jan 25 14:35:36.394: INFO: Pod "client-containers-291a9f73-b646-4abe-83ac-8c3197a61004" satisfied condition "success or failure"
Jan 25 14:35:36.400: INFO: Trying to get logs from node iruya-node pod client-containers-291a9f73-b646-4abe-83ac-8c3197a61004 container test-container: 
STEP: delete the pod
Jan 25 14:35:36.507: INFO: Waiting for pod client-containers-291a9f73-b646-4abe-83ac-8c3197a61004 to disappear
Jan 25 14:35:36.519: INFO: Pod client-containers-291a9f73-b646-4abe-83ac-8c3197a61004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:35:36.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4511" for this suite.
Jan 25 14:35:42.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:35:42.721: INFO: namespace containers-4511 deletion completed in 6.19706952s

• [SLOW TEST:14.582 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
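What "override all" means here: command replaces the image's ENTRYPOINT and args replaces its CMD. A sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["/bin/sh"]                                  # replaces ENTRYPOINT
        args: ["-c", "echo overridden entrypoint and args"]   # replaces CMD
    EOF
------------------------------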
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:35:42.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e7d3bb4e-df8e-45be-9d7d-04dcaf7198d1
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e7d3bb4e-df8e-45be-9d7d-04dcaf7198d1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:36:54.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8762" for this suite.
Jan 25 14:37:16.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:37:16.530: INFO: namespace projected-8762 deletion completed in 22.179900397s

• [SLOW TEST:93.809 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
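The propagation the test waits on (about 70 seconds here) can be reproduced by hand. A sketch with illustrative names, using a plain configMap volume, which behaves the same as the test's projected volume for updates; how quickly the file catches up depends on the kubelet sync period and cache TTL:

    kubectl create configmap live-cm --from-literal=key=value-1
    # ... start a pod ("demo-pod") that mounts live-cm as a volume at /etc/config ...
    kubectl create configmap live-cm --from-literal=key=value-2 \
      --dry-run -o yaml | kubectl replace -f -    # --dry-run=client on newer kubectl
    kubectl exec demo-pod -- cat /etc/config/key  # eventually prints value-2
------------------------------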
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:37:16.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ceb6a95a-97bf-4015-a01a-5a497e824612
STEP: Creating a pod to test consume configMaps
Jan 25 14:37:16.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a" in namespace "configmap-8514" to be "success or failure"
Jan 25 14:37:16.673: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261984ms
Jan 25 14:37:18.680: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011440517s
Jan 25 14:37:20.721: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052816483s
Jan 25 14:37:22.738: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069722405s
Jan 25 14:37:24.743: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074888611s
STEP: Saw pod success
Jan 25 14:37:24.743: INFO: Pod "pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a" satisfied condition "success or failure"
Jan 25 14:37:24.746: INFO: Trying to get logs from node iruya-node pod pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a container configmap-volume-test: 
STEP: delete the pod
Jan 25 14:37:24.803: INFO: Waiting for pod pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a to disappear
Jan 25 14:37:24.830: INFO: Pod pod-configmaps-514ff73f-8dc1-445a-98b4-28a67b708b6a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:37:24.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8514" for this suite.
Jan 25 14:37:30.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:37:31.010: INFO: namespace configmap-8514 deletion completed in 6.171700476s

• [SLOW TEST:14.479 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
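(The "mappings" in the spec name refer to the configMap volume's items field, which remaps a data key to a chosen relative path inside the mount. A rough equivalent, with illustrative names:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-cm                    # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-mapping-pod        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data-2       # the key is surfaced at this remapped relative path

(The container sees the value of data-1 at /etc/cm/path/to/data-2 rather than at the default /etc/cm/data-1, which is the mapping the spec asserts.)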
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:37:31.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:37:39.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2669" for this suite.
Jan 25 14:38:21.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:38:21.350: INFO: namespace kubelet-test-2669 deletion completed in 42.171498698s

• [SLOW TEST:50.338 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
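(The hostAliases field being tested adds extra entries to the kubelet-managed section of the pod's /etc/hosts. A minimal sketch, names illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-hostaliases           # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]

(kubectl logs on such a pod shows the kubelet-appended section of /etc/hosts containing the foo.local and bar.local entries, which is the write the spec checks for.)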
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:38:21.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 25 14:38:32.068: INFO: Successfully updated pod "labelsupdate6d39fea5-2120-48cd-99ec-e10d21dfc4b0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:38:34.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6345" for this suite.
Jan 25 14:38:56.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:38:56.381: INFO: namespace downward-api-6345 deletion completed in 22.245011173s

• [SLOW TEST:35.030 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
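(This spec exposes the pod's labels through a downwardAPI volume and then modifies them; the mounted file must reflect the change. A sketch of the shape involved, names illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-labels-pod            # illustrative name
  labels:
    key: value-1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

(Relabeling the running pod, e.g. kubectl label pod demo-labels-pod key=value-2 --overwrite, shows up in /etc/podinfo/labels without a restart, matching the "Successfully updated pod" line above.)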
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:38:56.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 25 14:38:56.446: INFO: Waiting up to 5m0s for pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120" in namespace "emptydir-2299" to be "success or failure"
Jan 25 14:38:56.469: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Pending", Reason="", readiness=false. Elapsed: 22.771753ms
Jan 25 14:38:58.481: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034958901s
Jan 25 14:39:00.494: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047977978s
Jan 25 14:39:02.508: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06201031s
Jan 25 14:39:04.529: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082939598s
Jan 25 14:39:06.545: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09919245s
STEP: Saw pod success
Jan 25 14:39:06.545: INFO: Pod "pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120" satisfied condition "success or failure"
Jan 25 14:39:06.577: INFO: Trying to get logs from node iruya-node pod pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120 container test-container: 
STEP: delete the pod
Jan 25 14:39:06.687: INFO: Waiting for pod pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120 to disappear
Jan 25 14:39:06.692: INFO: Pod pod-f45451ae-b3bb-41c2-b5a9-4e686aa08120 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:39:06.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2299" for this suite.
Jan 25 14:39:12.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:39:12.875: INFO: namespace emptydir-2299 deletion completed in 6.175047241s

• [SLOW TEST:16.494 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
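(The (root,0644,default) triple in the spec name encodes: write as root, create the file with mode 0644, and use the default emptyDir medium, i.e. node-local disk rather than tmpfs. Roughly, with illustrative names:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-0644         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo mount-tester > /test/file && chmod 0644 /test/file && ls -l /test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir: {}                   # empty block selects the default medium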
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:39:12.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:39:12.981: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 25 14:39:16.293: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:39:16.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3752" for this suite.
Jan 25 14:39:25.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:39:25.994: INFO: namespace replication-controller-3752 deletion completed in 9.641166813s

• [SLOW TEST:13.119 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
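(The quota-versus-RC interaction above can be reproduced with a pair of objects like the following sketch; names mirror the "condition-test" naming in the log but the image is illustrative.)

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                      # only two pods allowed in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                      # deliberately more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # any small image works here

(This reproduces the STEP sequence above: the third pod is rejected by quota admission, the controller surfaces a failure condition (ReplicaFailure) in the RC's status.conditions, and scaling replicas down to 2 clears it.)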
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:39:25.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 25 14:39:26.313: INFO: Waiting up to 5m0s for pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1" in namespace "emptydir-6549" to be "success or failure"
Jan 25 14:39:26.365: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.400498ms
Jan 25 14:39:28.374: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060670362s
Jan 25 14:39:30.385: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0720615s
Jan 25 14:39:32.392: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079008522s
Jan 25 14:39:34.438: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124957555s
STEP: Saw pod success
Jan 25 14:39:34.438: INFO: Pod "pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1" satisfied condition "success or failure"
Jan 25 14:39:34.445: INFO: Trying to get logs from node iruya-node pod pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1 container test-container: 
STEP: delete the pod
Jan 25 14:39:34.667: INFO: Waiting for pod pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1 to disappear
Jan 25 14:39:34.683: INFO: Pod pod-31ece3ed-1bce-4a8d-aa83-53bacc7b94a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:39:34.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6549" for this suite.
Jan 25 14:39:40.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:39:40.884: INFO: namespace emptydir-6549 deletion completed in 6.192035422s

• [SLOW TEST:14.889 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
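(Unlike the earlier emptyDir case, this spec checks the permissions of the mount point itself rather than of a file written inside it; the suite expects the directory on the default medium to be created world-writable, conventionally 0777. A sketch, names illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-mode         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && stat -c %a /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}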
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:39:40.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 25 14:39:41.573: INFO: Pod name wrapped-volume-race-c17c6e24-f55c-444c-96d5-ee8d3540f32b: Found 0 pods out of 5
Jan 25 14:39:46.589: INFO: Pod name wrapped-volume-race-c17c6e24-f55c-444c-96d5-ee8d3540f32b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c17c6e24-f55c-444c-96d5-ee8d3540f32b in namespace emptydir-wrapper-9639, will wait for the garbage collector to delete the pods
Jan 25 14:40:20.706: INFO: Deleting ReplicationController wrapped-volume-race-c17c6e24-f55c-444c-96d5-ee8d3540f32b took: 13.402772ms
Jan 25 14:40:21.106: INFO: Terminating ReplicationController wrapped-volume-race-c17c6e24-f55c-444c-96d5-ee8d3540f32b pods took: 400.808278ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 14:41:06.965: INFO: Pod name wrapped-volume-race-587852e3-63b0-41f0-a341-3c5bb3f32480: Found 0 pods out of 5
Jan 25 14:41:11.981: INFO: Pod name wrapped-volume-race-587852e3-63b0-41f0-a341-3c5bb3f32480: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-587852e3-63b0-41f0-a341-3c5bb3f32480 in namespace emptydir-wrapper-9639, will wait for the garbage collector to delete the pods
Jan 25 14:41:44.283: INFO: Deleting ReplicationController wrapped-volume-race-587852e3-63b0-41f0-a341-3c5bb3f32480 took: 80.492183ms
Jan 25 14:41:44.684: INFO: Terminating ReplicationController wrapped-volume-race-587852e3-63b0-41f0-a341-3c5bb3f32480 pods took: 400.98617ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 14:42:36.931: INFO: Pod name wrapped-volume-race-b018c664-4b16-4918-ba86-db041dd8223e: Found 0 pods out of 5
Jan 25 14:42:41.956: INFO: Pod name wrapped-volume-race-b018c664-4b16-4918-ba86-db041dd8223e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b018c664-4b16-4918-ba86-db041dd8223e in namespace emptydir-wrapper-9639, will wait for the garbage collector to delete the pods
Jan 25 14:43:12.255: INFO: Deleting ReplicationController wrapped-volume-race-b018c664-4b16-4918-ba86-db041dd8223e took: 32.954995ms
Jan 25 14:43:12.656: INFO: Terminating ReplicationController wrapped-volume-race-b018c664-4b16-4918-ba86-db041dd8223e pods took: 400.801121ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:43:58.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9639" for this suite.
Jan 25 14:44:08.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:44:08.807: INFO: namespace emptydir-wrapper-9639 deletion completed in 10.135451744s

• [SLOW TEST:267.923 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
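(Each pod spawned by the RCs above mounts many configMap volumes at once, and the spec repeats the create/delete cycle three times to catch races in the kubelet's volume wrapper. A cut-down sketch with two configMaps instead of the run's 50, all names illustrative:)

apiVersion: v1
kind: ConfigMap
metadata: { name: race-cm-0 }
data: { key: value }
---
apiVersion: v1
kind: ConfigMap
metadata: { name: race-cm-1 }
data: { key: value }
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    app: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        app: wrapped-volume-race-demo
    spec:
      containers:
      - name: sleeper
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - { name: cm-0, mountPath: /etc/cm-0 }
        - { name: cm-1, mountPath: /etc/cm-1 }
      volumes:
      - { name: cm-0, configMap: { name: race-cm-0 } }
      - { name: cm-1, configMap: { name: race-cm-1 } }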
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:44:08.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 25 14:44:08.910: INFO: Waiting up to 5m0s for pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c" in namespace "downward-api-2315" to be "success or failure"
Jan 25 14:44:08.923: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.976167ms
Jan 25 14:44:10.931: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020097373s
Jan 25 14:44:12.939: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02807611s
Jan 25 14:44:14.949: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038624792s
Jan 25 14:44:16.964: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053673622s
Jan 25 14:44:18.973: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063013802s
Jan 25 14:44:20.981: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.070760045s
Jan 25 14:44:22.996: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.085080921s
STEP: Saw pod success
Jan 25 14:44:22.996: INFO: Pod "downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c" satisfied condition "success or failure"
Jan 25 14:44:23.001: INFO: Trying to get logs from node iruya-node pod downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c container dapi-container: 
STEP: delete the pod
Jan 25 14:44:23.527: INFO: Waiting for pod downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c to disappear
Jan 25 14:44:23.540: INFO: Pod downward-api-ded7557f-2406-414e-90c5-8c493b50bb3c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:44:23.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2315" for this suite.
Jan 25 14:44:29.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:44:29.723: INFO: namespace downward-api-2315 deletion completed in 6.174669139s

• [SLOW TEST:20.915 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
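(The fallback this spec asserts: when a container declares no resources.limits, downward API resourceFieldRef values for limits.cpu and limits.memory resolve to the node's allocatable capacity. A sketch, names illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-default-limits        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # no resources.limits declared, so the values below fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory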
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:44:29.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0125 14:45:00.398656       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 14:45:00.398: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:45:00.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8723" for this suite.
Jan 25 14:45:08.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:45:08.585: INFO: namespace gc-8723 deletion completed in 8.18140768s

• [SLOW TEST:38.862 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
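(The deletion this spec issues is equivalent to sending a DeleteOptions body with propagationPolicy: Orphan; shown as YAML for readability, the API call sends the JSON equivalent, and the path placeholders are of course to be filled in.)

# Body of:  DELETE /apis/apps/v1/namespaces/<namespace>/deployments/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan

(With kubectl of this vintage the equivalent is "kubectl delete deployment <name> --cascade=false". After such a delete the Deployment is gone but its ReplicaSet keeps running with its ownerReference stripped, which is exactly what the 30-second watch above verifies the garbage collector leaves alone.)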
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:45:08.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:45:20.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-638" for this suite.
Jan 25 14:46:12.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:46:12.720: INFO: namespace kubelet-test-638 deletion completed in 52.19452412s

• [SLOW TEST:64.134 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
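(The behavior under test is simply that a container's stdout ends up in the pod's logs. A minimal sketch, names illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: demo-logs                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]

(kubectl logs demo-logs then returns the echoed line, which is effectively the assertion this spec makes.)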
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:46:12.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-lp4q7 in namespace proxy-3552
I0125 14:46:12.908895       8 runners.go:180] Created replication controller with name: proxy-service-lp4q7, namespace: proxy-3552, replica count: 1
I0125 14:46:13.959861       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:14.960421       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:15.960886       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:16.961236       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:17.961532       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:18.961817       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:19.962237       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:46:20.962839       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 14:46:21.963158       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 14:46:22.963482       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 14:46:23.964282       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 14:46:24.965069       8 runners.go:180] proxy-service-lp4q7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 14:46:24.971: INFO: Endpoint proxy-3552/proxy-service-lp4q7 is not ready yet
Jan 25 14:46:26.978: INFO: setup took 14.18315146s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 25 14:46:27.007: INFO: (0) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 28.701812ms)
Jan 25 14:46:27.010: INFO: (0) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 30.55344ms)
Jan 25 14:46:27.011: INFO: (0) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 32.655671ms)
Jan 25 14:46:27.011: INFO: (0) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 32.736744ms)
Jan 25 14:46:27.011: INFO: (0) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 33.01809ms)
Jan 25 14:46:27.011: INFO: (0) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 33.176325ms)
Jan 25 14:46:27.011: INFO: (0) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 33.163708ms)
Jan 25 14:46:27.012: INFO: (0) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 33.75664ms)
Jan 25 14:46:27.012: INFO: (0) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 34.313886ms)
Jan 25 14:46:27.012: INFO: (0) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 34.362106ms)
Jan 25 14:46:27.020: INFO: (0) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 41.589821ms)
Jan 25 14:46:27.023: INFO: (0) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 44.560887ms)
Jan 25 14:46:27.023: INFO: (0) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 45.216246ms)
Jan 25 14:46:27.023: INFO: (0) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 45.280447ms)
Jan 25 14:46:27.024: INFO: (0) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 46.063427ms)
Jan 25 14:46:27.028: INFO: (0) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test (200; 12.227638ms)
Jan 25 14:46:27.041: INFO: (1) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.230978ms)
Jan 25 14:46:27.041: INFO: (1) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 12.32074ms)
Jan 25 14:46:27.041: INFO: (1) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 12.621649ms)
Jan 25 14:46:27.042: INFO: (1) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 12.712616ms)
Jan 25 14:46:27.042: INFO: (1) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 13.056829ms)
Jan 25 14:46:27.043: INFO: (1) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 12.673125ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.684546ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 12.88552ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 13.046258ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 13.48441ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 13.406519ms)
Jan 25 14:46:27.061: INFO: (2) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 13.434434ms)
Jan 25 14:46:27.063: INFO: (2) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 14.896812ms)
Jan 25 14:46:27.063: INFO: (2) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 15.41998ms)
Jan 25 14:46:27.063: INFO: (2) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 15.290553ms)
Jan 25 14:46:27.064: INFO: (2) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 15.500362ms)
Jan 25 14:46:27.065: INFO: (2) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 16.622232ms)
Jan 25 14:46:27.066: INFO: (2) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 17.546157ms)
Jan 25 14:46:27.072: INFO: (3) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 6.260618ms)
Jan 25 14:46:27.075: INFO: (3) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 13.914449ms)
Jan 25 14:46:27.080: INFO: (3) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 14.035011ms)
Jan 25 14:46:27.080: INFO: (3) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 14.390526ms)
Jan 25 14:46:27.080: INFO: (3) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 14.387989ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 14.450021ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 14.33074ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 14.93813ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 14.736634ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 14.950886ms)
Jan 25 14:46:27.081: INFO: (3) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 14.783333ms)
Jan 25 14:46:27.095: INFO: (4) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 13.26074ms)
Jan 25 14:46:27.095: INFO: (4) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 13.580397ms)
Jan 25 14:46:27.095: INFO: (4) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 13.458262ms)
Jan 25 14:46:27.096: INFO: (4) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 14.030033ms)
Jan 25 14:46:27.096: INFO: (4) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 13.438412ms)
Jan 25 14:46:27.096: INFO: (4) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 13.064538ms)
Jan 25 14:46:27.096: INFO: (4) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 13.704919ms)
Jan 25 14:46:27.098: INFO: (4) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 15.615015ms)
Jan 25 14:46:27.098: INFO: (4) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 16.224127ms)
Jan 25 14:46:27.098: INFO: (4) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 15.636261ms)
Jan 25 14:46:27.118: INFO: (5) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 16.021882ms)
Jan 25 14:46:27.122: INFO: (5) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 19.626459ms)
Jan 25 14:46:27.124: INFO: (5) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 21.152555ms)
Jan 25 14:46:27.124: INFO: (5) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 21.248744ms)
Jan 25 14:46:27.125: INFO: (5) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 22.041658ms)
Jan 25 14:46:27.125: INFO: (5) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 22.743434ms)
Jan 25 14:46:27.125: INFO: (5) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 22.816042ms)
Jan 25 14:46:27.126: INFO: (5) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 23.734845ms)
Jan 25 14:46:27.127: INFO: (5) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 23.93195ms)
Jan 25 14:46:27.127: INFO: (5) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 23.976007ms)
Jan 25 14:46:27.136: INFO: (6) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 8.426522ms)
Jan 25 14:46:27.136: INFO: (6) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 9.166521ms)
Jan 25 14:46:27.136: INFO: (6) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test (200; 9.570015ms)
Jan 25 14:46:27.137: INFO: (6) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 9.820642ms)
Jan 25 14:46:27.137: INFO: (6) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 10.065525ms)
Jan 25 14:46:27.137: INFO: (6) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 10.084945ms)
Jan 25 14:46:27.137: INFO: (6) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 9.711758ms)
Jan 25 14:46:27.137: INFO: (6) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 9.833966ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 11.911724ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 12.034541ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 12.009177ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 12.56107ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 12.296191ms)
Jan 25 14:46:27.139: INFO: (6) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 12.221743ms)
Jan 25 14:46:27.152: INFO: (7) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 12.34766ms)
Jan 25 14:46:27.153: INFO: (7) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 13.647501ms)
Jan 25 14:46:27.153: INFO: (7) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 13.864667ms)
Jan 25 14:46:27.154: INFO: (7) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 14.122394ms)
Jan 25 14:46:27.154: INFO: (7) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 13.97266ms)
Jan 25 14:46:27.154: INFO: (7) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 14.036615ms)
Jan 25 14:46:27.154: INFO: (7) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 14.221945ms)
Jan 25 14:46:27.154: INFO: (7) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 14.338719ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 15.07252ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 15.215755ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 15.120537ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 15.170873ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 15.237004ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 15.410571ms)
Jan 25 14:46:27.155: INFO: (7) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 6.865513ms)
Jan 25 14:46:27.162: INFO: (8) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 9.428926ms)
Jan 25 14:46:27.165: INFO: (8) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 9.301518ms)
Jan 25 14:46:27.165: INFO: (8) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 9.685208ms)
Jan 25 14:46:27.167: INFO: (8) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 11.764698ms)
Jan 25 14:46:27.167: INFO: (8) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 11.736068ms)
Jan 25 14:46:27.167: INFO: (8) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.197189ms)
Jan 25 14:46:27.168: INFO: (8) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 12.76725ms)
Jan 25 14:46:27.169: INFO: (8) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 13.647229ms)
Jan 25 14:46:27.169: INFO: (8) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 13.619573ms)
Jan 25 14:46:27.169: INFO: (8) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 13.678229ms)
Jan 25 14:46:27.169: INFO: (8) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 13.633543ms)
Jan 25 14:46:27.179: INFO: (9) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 9.872334ms)
Jan 25 14:46:27.179: INFO: (9) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 9.807348ms)
Jan 25 14:46:27.179: INFO: (9) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 9.833423ms)
Jan 25 14:46:27.180: INFO: (9) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 10.928089ms)
Jan 25 14:46:27.181: INFO: (9) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 11.519456ms)
Jan 25 14:46:27.181: INFO: (9) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 11.408223ms)
Jan 25 14:46:27.181: INFO: (9) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 11.800326ms)
Jan 25 14:46:27.181: INFO: (9) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 11.873681ms)
Jan 25 14:46:27.181: INFO: (9) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 12.38357ms)
Jan 25 14:46:27.182: INFO: (9) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 12.460968ms)
Jan 25 14:46:27.183: INFO: (9) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 13.818947ms)
Jan 25 14:46:27.183: INFO: (9) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 14.218891ms)
Jan 25 14:46:27.183: INFO: (9) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test (200; 12.951228ms)
Jan 25 14:46:27.198: INFO: (10) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 12.768815ms)
Jan 25 14:46:27.199: INFO: (10) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 13.680577ms)
Jan 25 14:46:27.199: INFO: (10) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 13.750228ms)
Jan 25 14:46:27.200: INFO: (10) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 15.987315ms)
Jan 25 14:46:27.201: INFO: (10) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 15.960867ms)
Jan 25 14:46:27.201: INFO: (10) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 15.893694ms)
Jan 25 14:46:27.201: INFO: (10) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 15.804051ms)
Jan 25 14:46:27.201: INFO: (10) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 15.756421ms)
Jan 25 14:46:27.202: INFO: (10) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 16.234725ms)
Jan 25 14:46:27.217: INFO: (11) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 14.689117ms)
Jan 25 14:46:27.217: INFO: (11) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 15.199381ms)
Jan 25 14:46:27.220: INFO: (11) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 18.176429ms)
Jan 25 14:46:27.220: INFO: (11) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 18.154048ms)
Jan 25 14:46:27.220: INFO: (11) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 18.119446ms)
Jan 25 14:46:27.220: INFO: (11) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 18.222215ms)
Jan 25 14:46:27.220: INFO: (11) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 18.530758ms)
Jan 25 14:46:27.222: INFO: (11) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 20.195774ms)
Jan 25 14:46:27.223: INFO: (11) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 20.832214ms)
Jan 25 14:46:27.223: INFO: (11) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 20.665005ms)
Jan 25 14:46:27.223: INFO: (11) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 20.720131ms)
Jan 25 14:46:27.223: INFO: (11) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 21.027207ms)
Jan 25 14:46:27.223: INFO: (11) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 21.472473ms)
Jan 25 14:46:27.224: INFO: (11) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 21.548463ms)
Jan 25 14:46:27.224: INFO: (11) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 21.838022ms)
Jan 25 14:46:27.231: INFO: (12) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 6.60369ms)
Jan 25 14:46:27.231: INFO: (12) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 6.380409ms)
Jan 25 14:46:27.234: INFO: (12) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 9.549124ms)
Jan 25 14:46:27.234: INFO: (12) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 9.19208ms)
Jan 25 14:46:27.234: INFO: (12) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 9.456493ms)
Jan 25 14:46:27.234: INFO: (12) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test (200; 12.029598ms)
Jan 25 14:46:27.237: INFO: (12) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.673997ms)
Jan 25 14:46:27.237: INFO: (12) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 12.917479ms)
Jan 25 14:46:27.238: INFO: (12) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 13.53753ms)
Jan 25 14:46:27.238: INFO: (12) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 14.21862ms)
Jan 25 14:46:27.238: INFO: (12) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 14.08087ms)
Jan 25 14:46:27.243: INFO: (12) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 19.071007ms)
Jan 25 14:46:27.256: INFO: (13) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 10.140505ms)
Jan 25 14:46:27.256: INFO: (13) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 11.968594ms)
Jan 25 14:46:27.256: INFO: (13) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 11.822788ms)
Jan 25 14:46:27.256: INFO: (13) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 12.849566ms)
Jan 25 14:46:27.257: INFO: (13) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 12.412342ms)
Jan 25 14:46:27.258: INFO: (13) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.884796ms)
Jan 25 14:46:27.259: INFO: (13) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 13.839355ms)
Jan 25 14:46:27.260: INFO: (13) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 14.485085ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 17.648587ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 17.907147ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 17.587134ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test<... (200; 17.512997ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 17.09659ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 17.840731ms)
Jan 25 14:46:27.262: INFO: (13) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 17.644135ms)
Jan 25 14:46:27.274: INFO: (14) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 11.039262ms)
Jan 25 14:46:27.274: INFO: (14) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 10.865784ms)
Jan 25 14:46:27.274: INFO: (14) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 11.133304ms)
Jan 25 14:46:27.274: INFO: (14) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 11.428018ms)
Jan 25 14:46:27.275: INFO: (14) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 11.981686ms)
Jan 25 14:46:27.275: INFO: (14) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 11.758379ms)
Jan 25 14:46:27.275: INFO: (14) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 12.122824ms)
Jan 25 14:46:27.275: INFO: (14) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 12.597154ms)
Jan 25 14:46:27.275: INFO: (14) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 11.088544ms)
Jan 25 14:46:27.294: INFO: (15) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 11.016909ms)
Jan 25 14:46:27.295: INFO: (15) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 11.632152ms)
Jan 25 14:46:27.296: INFO: (15) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 12.581297ms)
Jan 25 14:46:27.299: INFO: (15) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname1/proxy/: foo (200; 15.359652ms)
Jan 25 14:46:27.299: INFO: (15) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 15.3543ms)
Jan 25 14:46:27.299: INFO: (15) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 15.962931ms)
Jan 25 14:46:27.299: INFO: (15) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname2/proxy/: bar (200; 15.87758ms)
Jan 25 14:46:27.300: INFO: (15) /api/v1/namespaces/proxy-3552/services/http:proxy-service-lp4q7:portname1/proxy/: foo (200; 16.863747ms)
Jan 25 14:46:27.300: INFO: (15) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 16.906814ms)
Jan 25 14:46:27.307: INFO: (16) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 7.158571ms)
Jan 25 14:46:27.307: INFO: (16) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 7.167272ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 7.79628ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 7.953462ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 7.998611ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 8.226673ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 8.146392ms)
Jan 25 14:46:27.308: INFO: (16) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 8.092284ms)
Jan 25 14:46:27.309: INFO: (16) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: test (200; 8.083526ms)
Jan 25 14:46:27.323: INFO: (17) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 8.15775ms)
Jan 25 14:46:27.323: INFO: (17) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 8.292805ms)
Jan 25 14:46:27.323: INFO: (17) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 8.344022ms)
Jan 25 14:46:27.324: INFO: (17) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 8.500941ms)
Jan 25 14:46:27.324: INFO: (17) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 8.678934ms)
Jan 25 14:46:27.324: INFO: (17) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 8.813599ms)
Jan 25 14:46:27.327: INFO: (17) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname2/proxy/: tls qux (200; 11.539126ms)
Jan 25 14:46:27.327: INFO: (17) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 16.70464ms)
Jan 25 14:46:27.332: INFO: (17) /api/v1/namespaces/proxy-3552/services/proxy-service-lp4q7:portname2/proxy/: bar (200; 16.783094ms)
Jan 25 14:46:27.332: INFO: (17) /api/v1/namespaces/proxy-3552/services/https:proxy-service-lp4q7:tlsportname1/proxy/: tls baz (200; 17.151737ms)
Jan 25 14:46:27.332: INFO: (17) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 17.156498ms)
Jan 25 14:46:27.338: INFO: (18) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 5.094436ms)
Jan 25 14:46:27.338: INFO: (18) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:460/proxy/: tls baz (200; 5.218863ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:1080/proxy/: ... (200; 6.648349ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 6.826335ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 6.832409ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 6.789015ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:160/proxy/: foo (200; 6.88533ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 6.946125ms)
Jan 25 14:46:27.339: INFO: (18) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ... (200; 9.601006ms)
Jan 25 14:46:27.353: INFO: (19) /api/v1/namespaces/proxy-3552/pods/http:proxy-service-lp4q7-vlh99:162/proxy/: bar (200; 9.551488ms)
Jan 25 14:46:27.353: INFO: (19) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99:1080/proxy/: test<... (200; 9.768198ms)
Jan 25 14:46:27.353: INFO: (19) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:462/proxy/: tls qux (200; 10.324211ms)
Jan 25 14:46:27.354: INFO: (19) /api/v1/namespaces/proxy-3552/pods/proxy-service-lp4q7-vlh99/proxy/: test (200; 9.92416ms)
Jan 25 14:46:27.354: INFO: (19) /api/v1/namespaces/proxy-3552/pods/https:proxy-service-lp4q7-vlh99:443/proxy/: ...
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6d2fb6b9-80d7-47d2-8e7b-41f77616ed0a
STEP: Creating a pod to test consume configMaps
Jan 25 14:46:43.292: INFO: Waiting up to 5m0s for pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581" in namespace "configmap-3307" to be "success or failure"
Jan 25 14:46:43.302: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118569ms
Jan 25 14:46:45.311: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018847586s
Jan 25 14:46:47.319: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027101727s
Jan 25 14:46:49.331: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039178154s
Jan 25 14:46:51.366: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074424104s
Jan 25 14:46:53.383: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090874699s
STEP: Saw pod success
Jan 25 14:46:53.383: INFO: Pod "pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581" satisfied condition "success or failure"
Jan 25 14:46:53.389: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581 container configmap-volume-test: 
STEP: delete the pod
Jan 25 14:46:53.673: INFO: Waiting for pod pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581 to disappear
Jan 25 14:46:53.684: INFO: Pod pod-configmaps-2228ddfb-fe33-4a71-b336-318bedfb5581 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:46:53.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3307" for this suite.
Jan 25 14:46:59.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:46:59.905: INFO: namespace configmap-3307 deletion completed in 6.21050461s

• [SLOW TEST:17.081 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
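The spec above creates a ConfigMap, mounts it into a pod that runs as a non-root user, and treats the pod reaching phase Succeeded as the pass signal. A rough sketch of what that pod creation looks like with client-go follows; all names, the image, and the mount path are illustrative rather than taken from the log, and the 1.15-era client-go used at the time of this run lacked the context argument on Create:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid := int64(1000) // any non-zero UID satisfies the non-root requirement
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name) // the test then polls for phase Succeeded
}
```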
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:46:59.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f39faae6-3369-4c66-8eb3-a135ad94466c
STEP: Creating configMap with name cm-test-opt-upd-081d3592-e7da-48be-a541-aaf5b33bfaca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f39faae6-3369-4c66-8eb3-a135ad94466c
STEP: Updating configmap cm-test-opt-upd-081d3592-e7da-48be-a541-aaf5b33bfaca
STEP: Creating configMap with name cm-test-opt-create-8d711166-ebea-4d99-83ab-f29d1024c9c9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:47:16.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5159" for this suite.
Jan 25 14:47:38.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:47:38.726: INFO: namespace projected-5159 deletion completed in 22.190562609s

• [SLOW TEST:38.820 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
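The interesting property in this spec is the Optional flag: one optional source ConfigMap is deleted, another is updated, and a third is created only after the pod starts, and the kubelet is expected to reconcile the projected volume's contents in place without restarting the pod. A minimal sketch of that volume shape, with illustrative names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional, // pod keeps running even if this source is deleted
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional, // updates show up in the files after the kubelet sync
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```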
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:47:38.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 25 14:47:38.877: INFO: Waiting up to 5m0s for pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7" in namespace "var-expansion-7824" to be "success or failure"
Jan 25 14:47:38.931: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.409755ms
Jan 25 14:47:40.947: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06914464s
Jan 25 14:47:42.964: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086912524s
Jan 25 14:47:44.972: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094627004s
Jan 25 14:47:46.981: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103161206s
Jan 25 14:47:48.997: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119806003s
STEP: Saw pod success
Jan 25 14:47:48.997: INFO: Pod "var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7" satisfied condition "success or failure"
Jan 25 14:47:49.004: INFO: Trying to get logs from node iruya-node pod var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7 container dapi-container: 
STEP: delete the pod
Jan 25 14:47:49.045: INFO: Waiting for pod var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7 to disappear
Jan 25 14:47:49.052: INFO: Pod var-expansion-1c5ce733-e64f-43c7-b1a1-a31c6adfbaf7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:47:49.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7824" for this suite.
Jan 25 14:47:55.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:47:55.193: INFO: namespace var-expansion-7824 deletion completed in 6.134167172s

• [SLOW TEST:16.466 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
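The env-composition check relies on the kubelet expanding $(VAR) references against variables defined earlier in the same container spec, before the container starts. A minimal illustration; the values are made up, and the real test asserts the expanded value in the container's output:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			// FOOBAR is composed from the two variables above; the kubelet
			// substitutes them, so the container sees "foo-value;;bar-value".
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
		},
	}
	fmt.Printf("%+v\n", c.Env)
}
```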
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:47:55.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:47:55.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:48:05.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5369" for this suite.
Jan 25 14:48:59.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:49:00.039: INFO: namespace pods-5369 deletion completed in 54.262748142s

• [SLOW TEST:64.846 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
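The conformance test dials the pod's exec subresource over a websocket directly; the sketch below drives the same round trip with client-go's stock SPDY executor against the identical URL, which is not the code path the test exercises but the everyday equivalent. The pod and container names are hypothetical, and Stream is the older blocking variant of StreamWithContext:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build the exec subresource URL for a hypothetical running pod.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("pod-exec-websocket-demo").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote execution"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String()) // output streamed back from the container
}
```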
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:49:00.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-584a8066-f557-4e8f-8e33-67200cf07b47 in namespace container-probe-915
Jan 25 14:49:10.168: INFO: Started pod liveness-584a8066-f557-4e8f-8e33-67200cf07b47 in namespace container-probe-915
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 14:49:10.176: INFO: Initial restart count of pod liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is 0
Jan 25 14:49:26.249: INFO: Restart count of pod container-probe-915/liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is now 1 (16.072754897s elapsed)
Jan 25 14:49:46.375: INFO: Restart count of pod container-probe-915/liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is now 2 (36.199264628s elapsed)
Jan 25 14:50:06.501: INFO: Restart count of pod container-probe-915/liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is now 3 (56.32551905s elapsed)
Jan 25 14:50:26.867: INFO: Restart count of pod container-probe-915/liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is now 4 (1m16.690844952s elapsed)
Jan 25 14:51:33.306: INFO: Restart count of pod container-probe-915/liveness-584a8066-f557-4e8f-8e33-67200cf07b47 is now 5 (2m23.129980457s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:51:33.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-915" for this suite.
Jan 25 14:51:39.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:51:39.565: INFO: namespace container-probe-915 deletion completed in 6.174463636s

• [SLOW TEST:159.525 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
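Note the widening gaps between restarts in the log (about 16s, then ~20s, then over a minute): the kubelet applies exponential back-off between container restarts, while status.containerStatuses[].restartCount only ever increases, which is exactly the monotonicity the spec asserts. A hedged sketch of a pod that produces this behavior; the image and probe commands are illustrative, and pre-1.24 versions of k8s.io/api named the ProbeHandler field Handler:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 10 seconds, then the probe target disappears.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1, // a single failed probe triggers a restart
				},
			}},
		},
	}
	fmt.Printf("%s restarts whenever /tmp/health is missing\n", pod.Name)
}
```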
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:51:39.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 25 14:51:39.670: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:51:39.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7375" for this suite.
Jan 25 14:51:45.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:51:45.916: INFO: namespace kubectl-7375 deletion completed in 6.126267054s

• [SLOW TEST:6.350 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
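Passing --port 0 (the -p 0 in the log) asks kubectl proxy to bind an ephemeral port and announce it on stdout, and the test then curls /api/ through that port. A standard-library Go sketch of the same handshake; the "Starting to serve on 127.0.0.1:PORT" banner format is an assumption based on kubectl's usual output:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// The first line of output names the ephemeral port the proxy chose.
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)
	if m == nil {
		panic("no port in proxy banner: " + line)
	}

	// Equivalent of the test's "curling proxy /api/ output" step.
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", m[1]))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body)
}
```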
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:51:45.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8253
I0125 14:51:46.060612       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8253, replica count: 1
I0125 14:51:47.111458       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:48.111782       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:49.112084       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:50.112396       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:51.112739       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:52.113716       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:53.114463       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:54.115179       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 14:51:55.115559       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 14:51:55.292: INFO: Created: latency-svc-8dvrg
Jan 25 14:51:55.398: INFO: Got endpoints: latency-svc-8dvrg [182.385998ms]
Jan 25 14:51:55.489: INFO: Created: latency-svc-css6l
Jan 25 14:51:55.580: INFO: Got endpoints: latency-svc-css6l [179.801538ms]
Jan 25 14:51:55.612: INFO: Created: latency-svc-hjfkt
Jan 25 14:51:55.623: INFO: Got endpoints: latency-svc-hjfkt [222.80345ms]
Jan 25 14:51:55.658: INFO: Created: latency-svc-r67l6
Jan 25 14:51:55.667: INFO: Got endpoints: latency-svc-r67l6 [266.934844ms]
Jan 25 14:51:55.834: INFO: Created: latency-svc-knwmf
Jan 25 14:51:55.879: INFO: Got endpoints: latency-svc-knwmf [478.664338ms]
Jan 25 14:51:56.052: INFO: Created: latency-svc-pcn24
Jan 25 14:51:56.054: INFO: Got endpoints: latency-svc-pcn24 [653.369548ms]
Jan 25 14:51:56.193: INFO: Created: latency-svc-tjm82
Jan 25 14:51:56.209: INFO: Got endpoints: latency-svc-tjm82 [808.438359ms]
Jan 25 14:51:56.281: INFO: Created: latency-svc-gkzsp
Jan 25 14:51:56.352: INFO: Got endpoints: latency-svc-gkzsp [951.296128ms]
Jan 25 14:51:56.372: INFO: Created: latency-svc-vvrh2
Jan 25 14:51:56.381: INFO: Got endpoints: latency-svc-vvrh2 [980.406273ms]
Jan 25 14:51:56.453: INFO: Created: latency-svc-zw579
Jan 25 14:51:56.498: INFO: Got endpoints: latency-svc-zw579 [1.097214196s]
Jan 25 14:51:56.525: INFO: Created: latency-svc-xzn55
Jan 25 14:51:56.537: INFO: Got endpoints: latency-svc-xzn55 [1.135981465s]
Jan 25 14:51:56.657: INFO: Created: latency-svc-cnrvz
Jan 25 14:51:56.665: INFO: Got endpoints: latency-svc-cnrvz [1.264825922s]
Jan 25 14:51:56.705: INFO: Created: latency-svc-qhc9j
Jan 25 14:51:56.722: INFO: Got endpoints: latency-svc-qhc9j [1.320976067s]
Jan 25 14:51:56.821: INFO: Created: latency-svc-zsld9
Jan 25 14:51:56.840: INFO: Got endpoints: latency-svc-zsld9 [1.439073904s]
Jan 25 14:51:56.908: INFO: Created: latency-svc-khpjm
Jan 25 14:51:56.911: INFO: Got endpoints: latency-svc-khpjm [1.510659221s]
Jan 25 14:51:57.011: INFO: Created: latency-svc-xwcvt
Jan 25 14:51:57.011: INFO: Got endpoints: latency-svc-xwcvt [1.611650479s]
Jan 25 14:51:57.064: INFO: Created: latency-svc-97656
Jan 25 14:51:57.071: INFO: Got endpoints: latency-svc-97656 [1.490792004s]
Jan 25 14:51:57.203: INFO: Created: latency-svc-2vrcc
Jan 25 14:51:57.205: INFO: Got endpoints: latency-svc-2vrcc [193.880189ms]
Jan 25 14:51:57.370: INFO: Created: latency-svc-l7jbc
Jan 25 14:51:57.385: INFO: Got endpoints: latency-svc-l7jbc [1.761509552s]
Jan 25 14:51:57.446: INFO: Created: latency-svc-glbcl
Jan 25 14:51:57.454: INFO: Got endpoints: latency-svc-glbcl [1.787003495s]
Jan 25 14:51:57.557: INFO: Created: latency-svc-9stq9
Jan 25 14:51:57.564: INFO: Got endpoints: latency-svc-9stq9 [1.684218572s]
Jan 25 14:51:57.613: INFO: Created: latency-svc-fnjdh
Jan 25 14:51:57.617: INFO: Got endpoints: latency-svc-fnjdh [1.56272732s]
Jan 25 14:51:57.753: INFO: Created: latency-svc-tczdr
Jan 25 14:51:57.764: INFO: Got endpoints: latency-svc-tczdr [1.554801844s]
Jan 25 14:51:57.819: INFO: Created: latency-svc-lgbmx
Jan 25 14:51:57.827: INFO: Got endpoints: latency-svc-lgbmx [1.475348948s]
Jan 25 14:51:57.940: INFO: Created: latency-svc-6z9zr
Jan 25 14:51:57.952: INFO: Got endpoints: latency-svc-6z9zr [1.570581034s]
Jan 25 14:51:58.024: INFO: Created: latency-svc-pkmxm
Jan 25 14:51:58.139: INFO: Got endpoints: latency-svc-pkmxm [1.641364208s]
Jan 25 14:51:58.174: INFO: Created: latency-svc-fmrdr
Jan 25 14:51:58.182: INFO: Got endpoints: latency-svc-fmrdr [1.644730632s]
Jan 25 14:51:58.302: INFO: Created: latency-svc-cf5vg
Jan 25 14:51:58.308: INFO: Got endpoints: latency-svc-cf5vg [1.642795708s]
Jan 25 14:51:58.360: INFO: Created: latency-svc-7clrp
Jan 25 14:51:58.375: INFO: Got endpoints: latency-svc-7clrp [1.653395135s]
Jan 25 14:51:58.477: INFO: Created: latency-svc-jkpwl
Jan 25 14:51:58.488: INFO: Got endpoints: latency-svc-jkpwl [1.64767482s]
Jan 25 14:51:58.520: INFO: Created: latency-svc-wgg49
Jan 25 14:51:58.542: INFO: Got endpoints: latency-svc-wgg49 [1.630916546s]
Jan 25 14:51:58.673: INFO: Created: latency-svc-knztp
Jan 25 14:51:58.678: INFO: Got endpoints: latency-svc-knztp [1.606346255s]
Jan 25 14:51:58.712: INFO: Created: latency-svc-8cts5
Jan 25 14:51:58.838: INFO: Created: latency-svc-gxcs9
Jan 25 14:51:58.843: INFO: Got endpoints: latency-svc-8cts5 [1.63825776s]
Jan 25 14:51:58.848: INFO: Got endpoints: latency-svc-gxcs9 [1.463479037s]
Jan 25 14:51:58.922: INFO: Created: latency-svc-p49f2
Jan 25 14:51:58.999: INFO: Got endpoints: latency-svc-p49f2 [1.545143423s]
Jan 25 14:51:59.044: INFO: Created: latency-svc-j5nvn
Jan 25 14:51:59.052: INFO: Got endpoints: latency-svc-j5nvn [1.488727357s]
Jan 25 14:51:59.171: INFO: Created: latency-svc-vpxqd
Jan 25 14:51:59.200: INFO: Got endpoints: latency-svc-vpxqd [1.583524474s]
Jan 25 14:51:59.247: INFO: Created: latency-svc-dbc8s
Jan 25 14:51:59.254: INFO: Got endpoints: latency-svc-dbc8s [1.489945632s]
Jan 25 14:51:59.374: INFO: Created: latency-svc-4p62p
Jan 25 14:51:59.381: INFO: Got endpoints: latency-svc-4p62p [1.553879898s]
Jan 25 14:51:59.449: INFO: Created: latency-svc-2pprb
Jan 25 14:51:59.591: INFO: Got endpoints: latency-svc-2pprb [1.639346603s]
Jan 25 14:51:59.597: INFO: Created: latency-svc-4bwl8
Jan 25 14:51:59.615: INFO: Got endpoints: latency-svc-4bwl8 [1.475975268s]
Jan 25 14:51:59.650: INFO: Created: latency-svc-vvsrd
Jan 25 14:51:59.672: INFO: Got endpoints: latency-svc-vvsrd [1.490214031s]
Jan 25 14:51:59.802: INFO: Created: latency-svc-hf2k8
Jan 25 14:51:59.824: INFO: Got endpoints: latency-svc-hf2k8 [1.515114651s]
Jan 25 14:51:59.870: INFO: Created: latency-svc-vdqfw
Jan 25 14:51:59.892: INFO: Got endpoints: latency-svc-vdqfw [1.516406342s]
Jan 25 14:51:59.985: INFO: Created: latency-svc-v968l
Jan 25 14:52:00.000: INFO: Got endpoints: latency-svc-v968l [1.512127042s]
Jan 25 14:52:00.083: INFO: Created: latency-svc-vtnqj
Jan 25 14:52:00.203: INFO: Got endpoints: latency-svc-vtnqj [1.659838232s]
Jan 25 14:52:00.217: INFO: Created: latency-svc-jpmpf
Jan 25 14:52:00.224: INFO: Got endpoints: latency-svc-jpmpf [1.546035925s]
Jan 25 14:52:00.441: INFO: Created: latency-svc-7gq42
Jan 25 14:52:00.452: INFO: Got endpoints: latency-svc-7gq42 [1.608235586s]
Jan 25 14:52:00.516: INFO: Created: latency-svc-fmtth
Jan 25 14:52:00.535: INFO: Got endpoints: latency-svc-fmtth [1.686841374s]
Jan 25 14:52:00.670: INFO: Created: latency-svc-5gztm
Jan 25 14:52:00.683: INFO: Got endpoints: latency-svc-5gztm [1.683214349s]
Jan 25 14:52:00.764: INFO: Created: latency-svc-6jhsz
Jan 25 14:52:00.839: INFO: Got endpoints: latency-svc-6jhsz [1.78669741s]
Jan 25 14:52:00.883: INFO: Created: latency-svc-ng6w5
Jan 25 14:52:00.922: INFO: Got endpoints: latency-svc-ng6w5 [1.721904317s]
Jan 25 14:52:01.025: INFO: Created: latency-svc-qjmrx
Jan 25 14:52:01.027: INFO: Got endpoints: latency-svc-qjmrx [1.772794727s]
Jan 25 14:52:01.088: INFO: Created: latency-svc-dshd8
Jan 25 14:52:01.193: INFO: Got endpoints: latency-svc-dshd8 [1.81101567s]
Jan 25 14:52:01.214: INFO: Created: latency-svc-j898m
Jan 25 14:52:01.222: INFO: Got endpoints: latency-svc-j898m [1.630734592s]
Jan 25 14:52:01.278: INFO: Created: latency-svc-4m9f2
Jan 25 14:52:01.284: INFO: Got endpoints: latency-svc-4m9f2 [1.668196933s]
Jan 25 14:52:01.408: INFO: Created: latency-svc-8tlsl
Jan 25 14:52:01.416: INFO: Got endpoints: latency-svc-8tlsl [1.743664083s]
Jan 25 14:52:01.473: INFO: Created: latency-svc-hk2rl
Jan 25 14:52:01.534: INFO: Got endpoints: latency-svc-hk2rl [1.710144708s]
Jan 25 14:52:01.587: INFO: Created: latency-svc-765p9
Jan 25 14:52:01.594: INFO: Got endpoints: latency-svc-765p9 [1.701853351s]
Jan 25 14:52:01.689: INFO: Created: latency-svc-h8pdb
Jan 25 14:52:01.702: INFO: Got endpoints: latency-svc-h8pdb [1.701876054s]
Jan 25 14:52:01.768: INFO: Created: latency-svc-vss4d
Jan 25 14:52:01.769: INFO: Got endpoints: latency-svc-vss4d [1.566573617s]
Jan 25 14:52:01.863: INFO: Created: latency-svc-nr54x
Jan 25 14:52:01.867: INFO: Got endpoints: latency-svc-nr54x [1.642508683s]
Jan 25 14:52:01.923: INFO: Created: latency-svc-fl9vq
Jan 25 14:52:01.945: INFO: Got endpoints: latency-svc-fl9vq [1.492838151s]
Jan 25 14:52:02.053: INFO: Created: latency-svc-r9j7b
Jan 25 14:52:02.059: INFO: Got endpoints: latency-svc-r9j7b [1.523816877s]
Jan 25 14:52:02.118: INFO: Created: latency-svc-b42p4
Jan 25 14:52:02.219: INFO: Got endpoints: latency-svc-b42p4 [1.536250107s]
Jan 25 14:52:02.262: INFO: Created: latency-svc-7bzl8
Jan 25 14:52:02.289: INFO: Got endpoints: latency-svc-7bzl8 [1.448373454s]
Jan 25 14:52:02.413: INFO: Created: latency-svc-qzlwt
Jan 25 14:52:02.418: INFO: Got endpoints: latency-svc-qzlwt [1.495361784s]
Jan 25 14:52:02.479: INFO: Created: latency-svc-bcqhx
Jan 25 14:52:02.586: INFO: Got endpoints: latency-svc-bcqhx [1.55862896s]
Jan 25 14:52:02.590: INFO: Created: latency-svc-p6vsl
Jan 25 14:52:02.606: INFO: Got endpoints: latency-svc-p6vsl [1.413441759s]
Jan 25 14:52:02.775: INFO: Created: latency-svc-s5kkp
Jan 25 14:52:02.832: INFO: Got endpoints: latency-svc-s5kkp [1.610168062s]
Jan 25 14:52:02.838: INFO: Created: latency-svc-4m8tg
Jan 25 14:52:02.838: INFO: Got endpoints: latency-svc-4m8tg [1.554151134s]
Jan 25 14:52:02.967: INFO: Created: latency-svc-mrj2f
Jan 25 14:52:02.978: INFO: Got endpoints: latency-svc-mrj2f [1.561421358s]
Jan 25 14:52:03.041: INFO: Created: latency-svc-jq9sh
Jan 25 14:52:03.133: INFO: Got endpoints: latency-svc-jq9sh [1.599041812s]
Jan 25 14:52:03.181: INFO: Created: latency-svc-px2kg
Jan 25 14:52:03.194: INFO: Got endpoints: latency-svc-px2kg [1.600329906s]
Jan 25 14:52:03.312: INFO: Created: latency-svc-jstpp
Jan 25 14:52:03.320: INFO: Got endpoints: latency-svc-jstpp [1.617593411s]
Jan 25 14:52:03.372: INFO: Created: latency-svc-9ppv6
Jan 25 14:52:03.377: INFO: Got endpoints: latency-svc-9ppv6 [1.60722479s]
Jan 25 14:52:03.565: INFO: Created: latency-svc-jrsgn
Jan 25 14:52:03.587: INFO: Got endpoints: latency-svc-jrsgn [1.719763882s]
Jan 25 14:52:03.634: INFO: Created: latency-svc-8cxd7
Jan 25 14:52:03.646: INFO: Got endpoints: latency-svc-8cxd7 [1.70083309s]
Jan 25 14:52:03.759: INFO: Created: latency-svc-txc8j
Jan 25 14:52:03.764: INFO: Got endpoints: latency-svc-txc8j [1.704519685s]
Jan 25 14:52:03.944: INFO: Created: latency-svc-9qd6g
Jan 25 14:52:03.987: INFO: Got endpoints: latency-svc-9qd6g [1.767080347s]
Jan 25 14:52:04.042: INFO: Created: latency-svc-482jc
Jan 25 14:52:04.161: INFO: Got endpoints: latency-svc-482jc [1.872527819s]
Jan 25 14:52:04.209: INFO: Created: latency-svc-4cx52
Jan 25 14:52:04.413: INFO: Got endpoints: latency-svc-4cx52 [1.99515307s]
Jan 25 14:52:04.420: INFO: Created: latency-svc-lltwq
Jan 25 14:52:04.458: INFO: Got endpoints: latency-svc-lltwq [1.870963142s]
Jan 25 14:52:04.650: INFO: Created: latency-svc-lp9mg
Jan 25 14:52:04.656: INFO: Got endpoints: latency-svc-lp9mg [2.04965986s]
Jan 25 14:52:04.734: INFO: Created: latency-svc-chqxj
Jan 25 14:52:04.913: INFO: Got endpoints: latency-svc-chqxj [2.080206035s]
Jan 25 14:52:04.953: INFO: Created: latency-svc-cc5mh
Jan 25 14:52:04.961: INFO: Got endpoints: latency-svc-cc5mh [2.122989544s]
Jan 25 14:52:05.005: INFO: Created: latency-svc-8m2xt
Jan 25 14:52:05.198: INFO: Got endpoints: latency-svc-8m2xt [2.21945554s]
Jan 25 14:52:05.232: INFO: Created: latency-svc-vdrd6
Jan 25 14:52:05.246: INFO: Got endpoints: latency-svc-vdrd6 [2.112441435s]
Jan 25 14:52:05.511: INFO: Created: latency-svc-9tnp6
Jan 25 14:52:05.547: INFO: Got endpoints: latency-svc-9tnp6 [2.35259222s]
Jan 25 14:52:05.686: INFO: Created: latency-svc-nt2j2
Jan 25 14:52:05.694: INFO: Got endpoints: latency-svc-nt2j2 [2.373271231s]
Jan 25 14:52:05.733: INFO: Created: latency-svc-zszmx
Jan 25 14:52:05.734: INFO: Got endpoints: latency-svc-zszmx [2.356747771s]
Jan 25 14:52:05.773: INFO: Created: latency-svc-wdwdj
Jan 25 14:52:05.888: INFO: Got endpoints: latency-svc-wdwdj [2.300993351s]
Jan 25 14:52:05.904: INFO: Created: latency-svc-6wpsl
Jan 25 14:52:05.914: INFO: Got endpoints: latency-svc-6wpsl [2.267360825s]
Jan 25 14:52:05.956: INFO: Created: latency-svc-n9wjq
Jan 25 14:52:05.961: INFO: Got endpoints: latency-svc-n9wjq [2.195947784s]
Jan 25 14:52:06.149: INFO: Created: latency-svc-kwj5c
Jan 25 14:52:06.163: INFO: Got endpoints: latency-svc-kwj5c [2.176008472s]
Jan 25 14:52:06.238: INFO: Created: latency-svc-dnxnv
Jan 25 14:52:06.418: INFO: Got endpoints: latency-svc-dnxnv [2.256154063s]
Jan 25 14:52:06.436: INFO: Created: latency-svc-sm85d
Jan 25 14:52:06.446: INFO: Got endpoints: latency-svc-sm85d [2.03303964s]
Jan 25 14:52:06.697: INFO: Created: latency-svc-dvc7t
Jan 25 14:52:06.886: INFO: Got endpoints: latency-svc-dvc7t [2.427872793s]
Jan 25 14:52:06.900: INFO: Created: latency-svc-mxnff
Jan 25 14:52:06.920: INFO: Got endpoints: latency-svc-mxnff [2.262910943s]
Jan 25 14:52:07.041: INFO: Created: latency-svc-v6t7v
Jan 25 14:52:07.051: INFO: Got endpoints: latency-svc-v6t7v [2.137558392s]
Jan 25 14:52:07.119: INFO: Created: latency-svc-f7zd2
Jan 25 14:52:07.222: INFO: Created: latency-svc-c2xdr
Jan 25 14:52:07.223: INFO: Got endpoints: latency-svc-f7zd2 [2.261653775s]
Jan 25 14:52:07.236: INFO: Got endpoints: latency-svc-c2xdr [2.038148369s]
Jan 25 14:52:07.282: INFO: Created: latency-svc-zc58v
Jan 25 14:52:07.291: INFO: Got endpoints: latency-svc-zc58v [2.044487401s]
Jan 25 14:52:07.395: INFO: Created: latency-svc-ssgtj
Jan 25 14:52:07.402: INFO: Got endpoints: latency-svc-ssgtj [1.853844263s]
Jan 25 14:52:07.459: INFO: Created: latency-svc-9dv2f
Jan 25 14:52:07.483: INFO: Got endpoints: latency-svc-9dv2f [1.78886395s]
Jan 25 14:52:07.584: INFO: Created: latency-svc-lhlvr
Jan 25 14:52:07.633: INFO: Got endpoints: latency-svc-lhlvr [1.899299277s]
Jan 25 14:52:07.645: INFO: Created: latency-svc-pw4nf
Jan 25 14:52:07.648: INFO: Got endpoints: latency-svc-pw4nf [1.759920382s]
Jan 25 14:52:07.785: INFO: Created: latency-svc-7m94z
Jan 25 14:52:07.788: INFO: Got endpoints: latency-svc-7m94z [1.874322338s]
Jan 25 14:52:07.852: INFO: Created: latency-svc-tlrgg
Jan 25 14:52:07.932: INFO: Got endpoints: latency-svc-tlrgg [1.971714145s]
Jan 25 14:52:07.976: INFO: Created: latency-svc-qw8jh
Jan 25 14:52:07.994: INFO: Got endpoints: latency-svc-qw8jh [1.83064716s]
Jan 25 14:52:08.059: INFO: Created: latency-svc-vrw2p
Jan 25 14:52:08.151: INFO: Got endpoints: latency-svc-vrw2p [1.732947636s]
Jan 25 14:52:08.191: INFO: Created: latency-svc-jcjhz
Jan 25 14:52:08.235: INFO: Got endpoints: latency-svc-jcjhz [1.788681653s]
Jan 25 14:52:08.239: INFO: Created: latency-svc-tnhbc
Jan 25 14:52:08.419: INFO: Got endpoints: latency-svc-tnhbc [1.533313725s]
Jan 25 14:52:08.465: INFO: Created: latency-svc-wnvlb
Jan 25 14:52:08.465: INFO: Got endpoints: latency-svc-wnvlb [1.544698334s]
Jan 25 14:52:08.625: INFO: Created: latency-svc-tqmtz
Jan 25 14:52:08.640: INFO: Got endpoints: latency-svc-tqmtz [1.589423101s]
Jan 25 14:52:08.684: INFO: Created: latency-svc-4bvr2
Jan 25 14:52:08.698: INFO: Got endpoints: latency-svc-4bvr2 [1.474799358s]
Jan 25 14:52:08.831: INFO: Created: latency-svc-fjc87
Jan 25 14:52:08.848: INFO: Got endpoints: latency-svc-fjc87 [1.611936867s]
Jan 25 14:52:08.924: INFO: Created: latency-svc-z52z5
Jan 25 14:52:09.036: INFO: Got endpoints: latency-svc-z52z5 [1.74472999s]
Jan 25 14:52:09.065: INFO: Created: latency-svc-vfkph
Jan 25 14:52:09.101: INFO: Got endpoints: latency-svc-vfkph [1.698537506s]
Jan 25 14:52:09.207: INFO: Created: latency-svc-w5mj2
Jan 25 14:52:09.207: INFO: Got endpoints: latency-svc-w5mj2 [1.724049645s]
Jan 25 14:52:09.242: INFO: Created: latency-svc-4txz2
Jan 25 14:52:09.276: INFO: Got endpoints: latency-svc-4txz2 [1.642694274s]
Jan 25 14:52:09.395: INFO: Created: latency-svc-8zq49
Jan 25 14:52:09.395: INFO: Got endpoints: latency-svc-8zq49 [1.747046443s]
Jan 25 14:52:09.457: INFO: Created: latency-svc-6rnzg
Jan 25 14:52:09.465: INFO: Got endpoints: latency-svc-6rnzg [1.676691703s]
Jan 25 14:52:09.606: INFO: Created: latency-svc-k6bgm
Jan 25 14:52:09.619: INFO: Got endpoints: latency-svc-k6bgm [1.686033168s]
Jan 25 14:52:09.794: INFO: Created: latency-svc-qdmx6
Jan 25 14:52:09.807: INFO: Got endpoints: latency-svc-qdmx6 [1.81245951s]
Jan 25 14:52:09.855: INFO: Created: latency-svc-q49kv
Jan 25 14:52:09.870: INFO: Got endpoints: latency-svc-q49kv [1.718190158s]
Jan 25 14:52:10.025: INFO: Created: latency-svc-nkwbk
Jan 25 14:52:10.046: INFO: Got endpoints: latency-svc-nkwbk [1.810489046s]
Jan 25 14:52:10.081: INFO: Created: latency-svc-lt2dj
Jan 25 14:52:10.119: INFO: Got endpoints: latency-svc-lt2dj [1.699279907s]
Jan 25 14:52:10.258: INFO: Created: latency-svc-98sdv
Jan 25 14:52:10.315: INFO: Got endpoints: latency-svc-98sdv [1.850149729s]
Jan 25 14:52:10.552: INFO: Created: latency-svc-vbwnt
Jan 25 14:52:10.663: INFO: Got endpoints: latency-svc-vbwnt [2.022192088s]
Jan 25 14:52:10.714: INFO: Created: latency-svc-x4mt4
Jan 25 14:52:10.760: INFO: Got endpoints: latency-svc-x4mt4 [2.062101754s]
Jan 25 14:52:10.764: INFO: Created: latency-svc-ngv94
Jan 25 14:52:10.885: INFO: Got endpoints: latency-svc-ngv94 [2.036675609s]
Jan 25 14:52:10.894: INFO: Created: latency-svc-52blv
Jan 25 14:52:10.917: INFO: Got endpoints: latency-svc-52blv [1.881134501s]
Jan 25 14:52:10.976: INFO: Created: latency-svc-jd6pc
Jan 25 14:52:10.984: INFO: Got endpoints: latency-svc-jd6pc [1.882355515s]
Jan 25 14:52:11.100: INFO: Created: latency-svc-8kkvv
Jan 25 14:52:11.125: INFO: Got endpoints: latency-svc-8kkvv [1.918632694s]
Jan 25 14:52:11.173: INFO: Created: latency-svc-lrnq5
Jan 25 14:52:11.291: INFO: Got endpoints: latency-svc-lrnq5 [2.014530828s]
Jan 25 14:52:11.302: INFO: Created: latency-svc-wb6sx
Jan 25 14:52:11.309: INFO: Got endpoints: latency-svc-wb6sx [1.914163469s]
Jan 25 14:52:11.396: INFO: Created: latency-svc-7tc4w
Jan 25 14:52:11.504: INFO: Got endpoints: latency-svc-7tc4w [2.039348194s]
Jan 25 14:52:11.569: INFO: Created: latency-svc-gg7c2
Jan 25 14:52:11.592: INFO: Got endpoints: latency-svc-gg7c2 [1.9727653s]
Jan 25 14:52:11.734: INFO: Created: latency-svc-wzbxq
Jan 25 14:52:11.788: INFO: Got endpoints: latency-svc-wzbxq [1.981508528s]
Jan 25 14:52:11.831: INFO: Created: latency-svc-zmsmb
Jan 25 14:52:11.922: INFO: Got endpoints: latency-svc-zmsmb [2.052292383s]
Jan 25 14:52:11.961: INFO: Created: latency-svc-g9fpk
Jan 25 14:52:11.987: INFO: Got endpoints: latency-svc-g9fpk [1.939886138s]
Jan 25 14:52:12.139: INFO: Created: latency-svc-5b8m8
Jan 25 14:52:12.169: INFO: Got endpoints: latency-svc-5b8m8 [2.050290077s]
Jan 25 14:52:12.205: INFO: Created: latency-svc-hr8tl
Jan 25 14:52:12.210: INFO: Got endpoints: latency-svc-hr8tl [1.89432029s]
Jan 25 14:52:12.313: INFO: Created: latency-svc-gdwzh
Jan 25 14:52:12.341: INFO: Got endpoints: latency-svc-gdwzh [1.677023044s]
Jan 25 14:52:12.378: INFO: Created: latency-svc-lqhj5
Jan 25 14:52:12.386: INFO: Got endpoints: latency-svc-lqhj5 [1.625795847s]
Jan 25 14:52:12.515: INFO: Created: latency-svc-lbqm4
Jan 25 14:52:12.758: INFO: Got endpoints: latency-svc-lbqm4 [1.872157103s]
Jan 25 14:52:12.769: INFO: Created: latency-svc-28wpt
Jan 25 14:52:12.776: INFO: Got endpoints: latency-svc-28wpt [1.858408443s]
Jan 25 14:52:12.943: INFO: Created: latency-svc-bpfcv
Jan 25 14:52:12.943: INFO: Got endpoints: latency-svc-bpfcv [1.959185813s]
Jan 25 14:52:13.019: INFO: Created: latency-svc-fzvpz
Jan 25 14:52:13.021: INFO: Got endpoints: latency-svc-fzvpz [1.895483015s]
Jan 25 14:52:13.274: INFO: Created: latency-svc-mwxrq
Jan 25 14:52:13.291: INFO: Got endpoints: latency-svc-mwxrq [2.000158776s]
Jan 25 14:52:13.419: INFO: Created: latency-svc-qb5sm
Jan 25 14:52:13.428: INFO: Got endpoints: latency-svc-qb5sm [2.118925483s]
Jan 25 14:52:13.707: INFO: Created: latency-svc-9rl5z
Jan 25 14:52:13.786: INFO: Got endpoints: latency-svc-9rl5z [2.281484656s]
Jan 25 14:52:13.787: INFO: Created: latency-svc-kb4rt
Jan 25 14:52:13.911: INFO: Got endpoints: latency-svc-kb4rt [2.319581755s]
Jan 25 14:52:13.985: INFO: Created: latency-svc-8s8nb
Jan 25 14:52:13.998: INFO: Got endpoints: latency-svc-8s8nb [2.209187732s]
Jan 25 14:52:14.091: INFO: Created: latency-svc-2s49c
Jan 25 14:52:14.159: INFO: Got endpoints: latency-svc-2s49c [2.236031294s]
Jan 25 14:52:14.173: INFO: Created: latency-svc-mm9x9
Jan 25 14:52:14.266: INFO: Got endpoints: latency-svc-mm9x9 [2.279447448s]
Jan 25 14:52:14.313: INFO: Created: latency-svc-2cthz
Jan 25 14:52:14.334: INFO: Got endpoints: latency-svc-2cthz [2.164659185s]
Jan 25 14:52:14.433: INFO: Created: latency-svc-wvqhr
Jan 25 14:52:14.445: INFO: Got endpoints: latency-svc-wvqhr [2.235280833s]
Jan 25 14:52:14.487: INFO: Created: latency-svc-hdpjs
Jan 25 14:52:14.497: INFO: Got endpoints: latency-svc-hdpjs [2.156089604s]
Jan 25 14:52:14.650: INFO: Created: latency-svc-2vfqh
Jan 25 14:52:14.653: INFO: Got endpoints: latency-svc-2vfqh [2.266344199s]
Jan 25 14:52:14.795: INFO: Created: latency-svc-dbl26
Jan 25 14:52:14.804: INFO: Got endpoints: latency-svc-dbl26 [2.046378213s]
Jan 25 14:52:14.853: INFO: Created: latency-svc-dfllv
Jan 25 14:52:14.987: INFO: Got endpoints: latency-svc-dfllv [2.211073916s]
Jan 25 14:52:14.987: INFO: Created: latency-svc-lz9r2
Jan 25 14:52:15.023: INFO: Got endpoints: latency-svc-lz9r2 [2.080283662s]
Jan 25 14:52:15.027: INFO: Created: latency-svc-hhjbk
Jan 25 14:52:15.035: INFO: Got endpoints: latency-svc-hhjbk [2.013950153s]
Jan 25 14:52:15.195: INFO: Created: latency-svc-krc9q
Jan 25 14:52:15.200: INFO: Got endpoints: latency-svc-krc9q [1.908733661s]
Jan 25 14:52:15.240: INFO: Created: latency-svc-kd7gc
Jan 25 14:52:15.251: INFO: Got endpoints: latency-svc-kd7gc [1.822614054s]
Jan 25 14:52:15.367: INFO: Created: latency-svc-m589z
Jan 25 14:52:15.375: INFO: Got endpoints: latency-svc-m589z [1.588898868s]
Jan 25 14:52:15.447: INFO: Created: latency-svc-2rwf5
Jan 25 14:52:15.450: INFO: Got endpoints: latency-svc-2rwf5 [1.538434627s]
Jan 25 14:52:15.567: INFO: Created: latency-svc-4k7r2
Jan 25 14:52:15.576: INFO: Got endpoints: latency-svc-4k7r2 [1.577737193s]
Jan 25 14:52:15.624: INFO: Created: latency-svc-6mgjd
Jan 25 14:52:15.635: INFO: Got endpoints: latency-svc-6mgjd [1.474998757s]
Jan 25 14:52:15.725: INFO: Created: latency-svc-2x786
Jan 25 14:52:15.735: INFO: Got endpoints: latency-svc-2x786 [1.46746666s]
Jan 25 14:52:15.780: INFO: Created: latency-svc-66rxm
Jan 25 14:52:15.887: INFO: Got endpoints: latency-svc-66rxm [1.553035427s]
Jan 25 14:52:15.890: INFO: Created: latency-svc-x5hqq
Jan 25 14:52:15.898: INFO: Got endpoints: latency-svc-x5hqq [1.451515322s]
Jan 25 14:52:15.974: INFO: Created: latency-svc-bvpfk
Jan 25 14:52:16.059: INFO: Got endpoints: latency-svc-bvpfk [1.561492614s]
Jan 25 14:52:16.069: INFO: Created: latency-svc-xq757
Jan 25 14:52:16.090: INFO: Got endpoints: latency-svc-xq757 [1.437284274s]
Jan 25 14:52:16.120: INFO: Created: latency-svc-c7ldt
Jan 25 14:52:16.128: INFO: Got endpoints: latency-svc-c7ldt [1.323404995s]
Jan 25 14:52:16.218: INFO: Created: latency-svc-4282z
Jan 25 14:52:16.232: INFO: Got endpoints: latency-svc-4282z [1.24441361s]
Jan 25 14:52:16.291: INFO: Created: latency-svc-6gff7
Jan 25 14:52:16.405: INFO: Got endpoints: latency-svc-6gff7 [1.381229406s]
Jan 25 14:52:16.433: INFO: Created: latency-svc-x65vv
Jan 25 14:52:16.438: INFO: Got endpoints: latency-svc-x65vv [1.4025215s]
Jan 25 14:52:16.494: INFO: Created: latency-svc-sphcj
Jan 25 14:52:16.568: INFO: Got endpoints: latency-svc-sphcj [1.367583912s]
Jan 25 14:52:16.604: INFO: Created: latency-svc-69gw2
Jan 25 14:52:16.612: INFO: Got endpoints: latency-svc-69gw2 [1.360139511s]
Jan 25 14:52:16.737: INFO: Created: latency-svc-p8bb9
Jan 25 14:52:16.784: INFO: Created: latency-svc-8sdbj
Jan 25 14:52:16.785: INFO: Got endpoints: latency-svc-p8bb9 [1.40911294s]
Jan 25 14:52:16.797: INFO: Got endpoints: latency-svc-8sdbj [1.347251993s]
Jan 25 14:52:16.885: INFO: Created: latency-svc-ltx8g
Jan 25 14:52:16.888: INFO: Got endpoints: latency-svc-ltx8g [1.31235111s]
Jan 25 14:52:16.958: INFO: Created: latency-svc-zdg92
Jan 25 14:52:16.968: INFO: Got endpoints: latency-svc-zdg92 [1.333363615s]
Jan 25 14:52:17.093: INFO: Created: latency-svc-pfp9q
Jan 25 14:52:17.105: INFO: Got endpoints: latency-svc-pfp9q [1.370331473s]
Jan 25 14:52:17.235: INFO: Created: latency-svc-pf9l5
Jan 25 14:52:17.278: INFO: Got endpoints: latency-svc-pf9l5 [1.390754382s]
Jan 25 14:52:17.278: INFO: Created: latency-svc-6g857
Jan 25 14:52:17.298: INFO: Got endpoints: latency-svc-6g857 [1.399632839s]
Jan 25 14:52:17.460: INFO: Created: latency-svc-jg977
Jan 25 14:52:17.502: INFO: Got endpoints: latency-svc-jg977 [1.443094246s]
Jan 25 14:52:17.541: INFO: Created: latency-svc-t9894
Jan 25 14:52:17.549: INFO: Got endpoints: latency-svc-t9894 [1.458446694s]
Jan 25 14:52:17.689: INFO: Created: latency-svc-xhjg9
Jan 25 14:52:17.709: INFO: Got endpoints: latency-svc-xhjg9 [1.581118355s]
Jan 25 14:52:17.745: INFO: Created: latency-svc-btnkv
Jan 25 14:52:17.878: INFO: Got endpoints: latency-svc-btnkv [1.646606535s]
Jan 25 14:52:17.886: INFO: Created: latency-svc-tcvjd
Jan 25 14:52:17.897: INFO: Got endpoints: latency-svc-tcvjd [1.492204319s]
Jan 25 14:52:17.968: INFO: Created: latency-svc-b4rt4
Jan 25 14:52:18.087: INFO: Got endpoints: latency-svc-b4rt4 [1.649533041s]
Jan 25 14:52:18.132: INFO: Created: latency-svc-cqd4d
Jan 25 14:52:18.142: INFO: Got endpoints: latency-svc-cqd4d [1.574203595s]
Jan 25 14:52:18.194: INFO: Created: latency-svc-454dr
Jan 25 14:52:18.285: INFO: Got endpoints: latency-svc-454dr [1.672791181s]
Jan 25 14:52:18.375: INFO: Created: latency-svc-q667v
Jan 25 14:52:18.377: INFO: Got endpoints: latency-svc-q667v [1.592442599s]
Jan 25 14:52:18.498: INFO: Created: latency-svc-g22q4
Jan 25 14:52:18.508: INFO: Got endpoints: latency-svc-g22q4 [1.710067726s]
Jan 25 14:52:18.562: INFO: Created: latency-svc-f94js
Jan 25 14:52:18.669: INFO: Got endpoints: latency-svc-f94js [1.78074518s]
Jan 25 14:52:18.700: INFO: Created: latency-svc-xlq55
Jan 25 14:52:18.722: INFO: Got endpoints: latency-svc-xlq55 [1.754029924s]
Jan 25 14:52:18.723: INFO: Latencies: [179.801538ms 193.880189ms 222.80345ms 266.934844ms 478.664338ms 653.369548ms 808.438359ms 951.296128ms 980.406273ms 1.097214196s 1.135981465s 1.24441361s 1.264825922s 1.31235111s 1.320976067s 1.323404995s 1.333363615s 1.347251993s 1.360139511s 1.367583912s 1.370331473s 1.381229406s 1.390754382s 1.399632839s 1.4025215s 1.40911294s 1.413441759s 1.437284274s 1.439073904s 1.443094246s 1.448373454s 1.451515322s 1.458446694s 1.463479037s 1.46746666s 1.474799358s 1.474998757s 1.475348948s 1.475975268s 1.488727357s 1.489945632s 1.490214031s 1.490792004s 1.492204319s 1.492838151s 1.495361784s 1.510659221s 1.512127042s 1.515114651s 1.516406342s 1.523816877s 1.533313725s 1.536250107s 1.538434627s 1.544698334s 1.545143423s 1.546035925s 1.553035427s 1.553879898s 1.554151134s 1.554801844s 1.55862896s 1.561421358s 1.561492614s 1.56272732s 1.566573617s 1.570581034s 1.574203595s 1.577737193s 1.581118355s 1.583524474s 1.588898868s 1.589423101s 1.592442599s 1.599041812s 1.600329906s 1.606346255s 1.60722479s 1.608235586s 1.610168062s 1.611650479s 1.611936867s 1.617593411s 1.625795847s 1.630734592s 1.630916546s 1.63825776s 1.639346603s 1.641364208s 1.642508683s 1.642694274s 1.642795708s 1.644730632s 1.646606535s 1.64767482s 1.649533041s 1.653395135s 1.659838232s 1.668196933s 1.672791181s 1.676691703s 1.677023044s 1.683214349s 1.684218572s 1.686033168s 1.686841374s 1.698537506s 1.699279907s 1.70083309s 1.701853351s 1.701876054s 1.704519685s 1.710067726s 1.710144708s 1.718190158s 1.719763882s 1.721904317s 1.724049645s 1.732947636s 1.743664083s 1.74472999s 1.747046443s 1.754029924s 1.759920382s 1.761509552s 1.767080347s 1.772794727s 1.78074518s 1.78669741s 1.787003495s 1.788681653s 1.78886395s 1.810489046s 1.81101567s 1.81245951s 1.822614054s 1.83064716s 1.850149729s 1.853844263s 1.858408443s 1.870963142s 1.872157103s 1.872527819s 1.874322338s 1.881134501s 1.882355515s 1.89432029s 1.895483015s 1.899299277s 1.908733661s 1.914163469s 1.918632694s 1.939886138s 1.959185813s 1.971714145s 1.9727653s 1.981508528s 1.99515307s 2.000158776s 2.013950153s 2.014530828s 2.022192088s 2.03303964s 2.036675609s 2.038148369s 2.039348194s 2.044487401s 2.046378213s 2.04965986s 2.050290077s 2.052292383s 2.062101754s 2.080206035s 2.080283662s 2.112441435s 2.118925483s 2.122989544s 2.137558392s 2.156089604s 2.164659185s 2.176008472s 2.195947784s 2.209187732s 2.211073916s 2.21945554s 2.235280833s 2.236031294s 2.256154063s 2.261653775s 2.262910943s 2.266344199s 2.267360825s 2.279447448s 2.281484656s 2.300993351s 2.319581755s 2.35259222s 2.356747771s 2.373271231s 2.427872793s]
Jan 25 14:52:18.724: INFO: 50 %ile: 1.676691703s
Jan 25 14:52:18.724: INFO: 90 %ile: 2.176008472s
Jan 25 14:52:18.724: INFO: 99 %ile: 2.373271231s
Jan 25 14:52:18.724: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:52:18.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8253" for this suite.
Jan 25 14:53:02.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:53:02.998: INFO: namespace svc-latency-8253 deletion completed in 44.265513072s

• [SLOW TEST:77.081 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
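The summary above is produced by timing each Created-to-"Got endpoints" interval, sorting the 200 samples, and reading off percentiles. An illustrative sketch of that arithmetic; a handful of the sample durations are copied from this run, and the exact indexing convention of the e2e framework is an assumption:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an ascending-sorted slice,
// using a simple nearest-rank-style index.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (p * len(sorted)) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{
		182385998 * time.Nanosecond,  // latency-svc-8dvrg
		179801538 * time.Nanosecond,  // latency-svc-css6l
		1676691703 * time.Nanosecond, // the run's reported median
		2176008472 * time.Nanosecond,
		2427872793 * time.Nanosecond, // slowest endpoint in this run
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
```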
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:53:02.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-957f7f40-fb99-4b36-939e-8d2c4f61ea56
STEP: Creating a pod to test consume configMaps
Jan 25 14:53:03.123: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df" in namespace "projected-6070" to be "success or failure"
Jan 25 14:53:03.129: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499325ms
Jan 25 14:53:05.139: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016395206s
Jan 25 14:53:07.147: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024663925s
Jan 25 14:53:09.156: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03339144s
Jan 25 14:53:11.170: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047241219s
Jan 25 14:53:13.179: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056019507s
STEP: Saw pod success
Jan 25 14:53:13.179: INFO: Pod "pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df" satisfied condition "success or failure"
Jan 25 14:53:13.188: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 14:53:13.263: INFO: Waiting for pod pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df to disappear
Jan 25 14:53:13.268: INFO: Pod pod-projected-configmaps-e7e1bfe4-9a5c-4d5f-b1f4-b4368d6307df no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:53:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6070" for this suite.
Jan 25 14:53:19.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:53:19.468: INFO: namespace projected-6070 deletion completed in 6.194212844s

• [SLOW TEST:16.470 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
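Here a single ConfigMap is surfaced through two projected volumes mounted at different paths in the same pod, and the test container reads both copies. A sketch of that pod shape with illustrative names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmRef := corev1.LocalObjectReference{Name: "projected-configmap-test-volume"}
	projected := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: cmRef}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "projected-configmap-volume", VolumeSource: projected()},
				{Name: "projected-configmap-volume-2", VolumeSource: projected()},
			},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox",
				// Both mounts expose the same ConfigMap keys.
				Command: []string{"sh", "-c",
					"cat /etc/projected-configmap-volume/data-1 /etc/projected-configmap-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2"},
				},
			}},
		},
	}
	fmt.Println("pod mounts one ConfigMap at two paths:", pod.Name)
}
```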
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:53:19.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9558.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9558.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9558.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9558.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:53:33.777: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.789: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.800: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9558.svc.cluster.local from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.805: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.818: INFO: Unable to read jessie_udp@PodARecord from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.826: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635: the server could not find the requested resource (get pods dns-test-35940f24-8c4e-4d07-a574-074cab9ca635)
Jan 25 14:53:33.827: INFO: Lookups using dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9558.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 25 14:53:38.931: INFO: DNS probes using dns-9558/dns-test-35940f24-8c4e-4d07-a574-074cab9ca635 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:53:38.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9558" for this suite.
Jan 25 14:53:45.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:53:45.275: INFO: namespace dns-9558 deletion completed in 6.189482343s

• [SLOW TEST:25.807 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
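For reference, the wheezy/jessie probe loops above reduce to three checks per image (namespace dns-9558 and the dig flags come straight from the log; the doubled $$ in the logged command escapes Kubernetes' own $(VAR) expansion in container commands, so the pod's shell sees a single $):

$ getent hosts dns-querier-1.dns-test-service.dns-9558.svc.cluster.local    # resolves via /etc/hosts or cluster DNS
$ podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9558.pod.cluster.local"}')
$ dig +notcp +noall +answer +search "$podARec" A    # UDP lookup of the pod A record; +tcp for the TCP variant

A pod IP such as 10.44.0.1 maps to the A record 10-44-0-1.dns-9558.pod.cluster.local. The "Unable to read ... the server could not find the requested resource" lines at 14:53:33 appear to be the poller fetching /results files through the pod proxy before the probes have written them; all lookups succeed by 14:53:38.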
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:53:45.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 25 14:53:53.965: INFO: Successfully updated pod "labelsupdatef9e15082-ea75-4a62-988b-858994e7262b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:53:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-868" for this suite.
Jan 25 14:54:18.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:54:18.274: INFO: namespace projected-868 deletion completed in 22.138239247s

• [SLOW TEST:32.998 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
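The labels-update test exercises the downward API refresh path: relabel the pod and the kubelet rewrites the projected file on its next volume sync, which is why the test polls rather than asserting immediately. A minimal sketch (pod name and namespace from the log; the /etc/podinfo/labels mount path is an assumption, not shown in this run):

$ kubectl -n projected-868 label pod labelsupdatef9e15082-ea75-4a62-988b-858994e7262b mylabel=updated --overwrite
$ kubectl -n projected-868 exec labelsupdatef9e15082-ea75-4a62-988b-858994e7262b -- cat /etc/podinfo/labels    # assumed path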
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:54:18.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c39563cf-cb64-485a-93b7-3c456237d2b8
STEP: Creating a pod to test consume configMaps
Jan 25 14:54:18.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72" in namespace "projected-1143" to be "success or failure"
Jan 25 14:54:18.441: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Pending", Reason="", readiness=false. Elapsed: 32.503203ms
Jan 25 14:54:20.455: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046546818s
Jan 25 14:54:22.464: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055283479s
Jan 25 14:54:24.564: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156076802s
Jan 25 14:54:26.582: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173901087s
Jan 25 14:54:28.596: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18747367s
STEP: Saw pod success
Jan 25 14:54:28.596: INFO: Pod "pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72" satisfied condition "success or failure"
Jan 25 14:54:28.605: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 14:54:28.646: INFO: Waiting for pod pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72 to disappear
Jan 25 14:54:28.651: INFO: Pod pod-projected-configmaps-76bfcc05-e3f9-4ee5-96c2-7d60a3115a72 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:54:28.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1143" for this suite.
Jan 25 14:54:34.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:54:34.779: INFO: namespace projected-1143 deletion completed in 6.124784083s

• [SLOW TEST:16.505 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
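A hedged sketch of what "mappings and Item mode" means in spec terms: a projected configMap source that renames a key and sets a per-file mode (all names and the 0400 mode are illustrative, not taken from this run):

$ kubectl create configmap demo-cm --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a %n' /etc/projected/mapped/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1          # source key
            path: mapped/data-1  # mapped path inside the mount
            mode: 0400           # per-item mode, overrides any defaultMode
EOF
$ kubectl logs projected-mode-demo    # expected: 400 /etc/projected/mapped/data-1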
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:54:34.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:54:47.021: INFO: File wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-d711443a-04d8-4833-8ec0-93cffe41faba contains '' instead of 'foo.example.com.'
Jan 25 14:54:47.029: INFO: File jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-d711443a-04d8-4833-8ec0-93cffe41faba contains '' instead of 'foo.example.com.'
Jan 25 14:54:47.029: INFO: Lookups using dns-8640/dns-test-d711443a-04d8-4833-8ec0-93cffe41faba failed for: [wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local]

Jan 25 14:54:52.056: INFO: File jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-d711443a-04d8-4833-8ec0-93cffe41faba contains '' instead of 'foo.example.com.'
Jan 25 14:54:52.056: INFO: Lookups using dns-8640/dns-test-d711443a-04d8-4833-8ec0-93cffe41faba failed for: [jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local]

Jan 25 14:54:57.057: INFO: DNS probes using dns-test-d711443a-04d8-4833-8ec0-93cffe41faba succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:55:13.286: INFO: File wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b contains '' instead of 'bar.example.com.'
Jan 25 14:55:13.300: INFO: File jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b contains '' instead of 'bar.example.com.'
Jan 25 14:55:13.300: INFO: Lookups using dns-8640/dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b failed for: [wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local]

Jan 25 14:55:18.376: INFO: File wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 14:55:18.386: INFO: Lookups using dns-8640/dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b failed for: [wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local]

Jan 25 14:55:23.327: INFO: DNS probes using dns-test-884ae6d6-83a9-4fbb-8857-4b2dbc446d8b succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8640.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 14:55:39.696: INFO: File wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-509c33f2-b904-4235-91b3-ff85e9cc774c contains '' instead of '10.106.166.51'
Jan 25 14:55:39.701: INFO: File jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local from pod  dns-8640/dns-test-509c33f2-b904-4235-91b3-ff85e9cc774c contains '' instead of '10.106.166.51'
Jan 25 14:55:39.701: INFO: Lookups using dns-8640/dns-test-509c33f2-b904-4235-91b3-ff85e9cc774c failed for: [wheezy_udp@dns-test-service-3.dns-8640.svc.cluster.local jessie_udp@dns-test-service-3.dns-8640.svc.cluster.local]

Jan 25 14:55:44.724: INFO: DNS probes using dns-test-509c33f2-b904-4235-91b3-ff85e9cc774c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:55:44.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8640" for this suite.
Jan 25 14:55:50.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:55:51.134: INFO: namespace dns-8640 deletion completed in 6.24682223s

• [SLOW TEST:76.355 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
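The three phases above can be reproduced by hand: CNAME to foo.example.com, CNAME to bar.example.com, then an A record once the service becomes a ClusterIP. A sketch (service and namespace names from the log; the kubectl steps are assumed equivalents of what the test does through the API):

$ kubectl -n dns-8640 create service externalname dns-test-service-3 --external-name foo.example.com
$ dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME    # foo.example.com.
$ kubectl -n dns-8640 patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
$ dig +short dns-test-service-3.dns-8640.svc.cluster.local CNAME    # bar.example.com. once caches expire

The empty-result and stale foo.example.com. readings at 14:54:47 and 14:55:18 are ordinary propagation lag; each phase converges within a few 5-second polling rounds.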
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:55:51.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-9bb02352-c4ce-4efe-805d-49481aa83efe
STEP: Creating a pod to test consume configMaps
Jan 25 14:55:51.263: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60" in namespace "projected-7431" to be "success or failure"
Jan 25 14:55:51.275: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Pending", Reason="", readiness=false. Elapsed: 11.725286ms
Jan 25 14:55:53.291: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0281356s
Jan 25 14:55:55.302: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039374331s
Jan 25 14:55:57.316: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052739607s
Jan 25 14:55:59.325: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061694667s
Jan 25 14:56:01.333: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070474925s
STEP: Saw pod success
Jan 25 14:56:01.333: INFO: Pod "pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60" satisfied condition "success or failure"
Jan 25 14:56:01.337: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 14:56:01.410: INFO: Waiting for pod pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60 to disappear
Jan 25 14:56:01.448: INFO: Pod pod-projected-configmaps-7403090b-43a2-43d0-b40e-22d17cd21e60 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:56:01.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7431" for this suite.
Jan 25 14:56:07.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:56:07.640: INFO: namespace projected-7431 deletion completed in 6.182296625s

• [SLOW TEST:16.505 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:56:07.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 14:56:07.818: INFO: Create a RollingUpdate DaemonSet
Jan 25 14:56:07.826: INFO: Check that daemon pods launch on every node of the cluster
Jan 25 14:56:07.860: INFO: Number of nodes with available pods: 0
Jan 25 14:56:07.861: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:09.829: INFO: Number of nodes with available pods: 0
Jan 25 14:56:09.830: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:10.376: INFO: Number of nodes with available pods: 0
Jan 25 14:56:10.376: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:11.362: INFO: Number of nodes with available pods: 0
Jan 25 14:56:11.362: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:11.897: INFO: Number of nodes with available pods: 0
Jan 25 14:56:11.897: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:12.957: INFO: Number of nodes with available pods: 0
Jan 25 14:56:12.957: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:14.906: INFO: Number of nodes with available pods: 0
Jan 25 14:56:14.906: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:15.883: INFO: Number of nodes with available pods: 0
Jan 25 14:56:15.883: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:16.931: INFO: Number of nodes with available pods: 0
Jan 25 14:56:16.932: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:17.881: INFO: Number of nodes with available pods: 0
Jan 25 14:56:17.881: INFO: Node iruya-node is running more than one daemon pod
Jan 25 14:56:18.884: INFO: Number of nodes with available pods: 2
Jan 25 14:56:18.884: INFO: Number of running nodes: 2, number of available pods: 2
Jan 25 14:56:18.884: INFO: Update the DaemonSet to trigger a rollout
Jan 25 14:56:18.910: INFO: Updating DaemonSet daemon-set
Jan 25 14:56:28.179: INFO: Roll back the DaemonSet before rollout is complete
Jan 25 14:56:28.192: INFO: Updating DaemonSet daemon-set
Jan 25 14:56:28.192: INFO: Make sure DaemonSet rollback is complete
Jan 25 14:56:28.208: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:28.208: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:29.291: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:29.291: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:30.288: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:30.288: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:31.289: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:31.289: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:32.278: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:32.278: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:33.295: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:33.295: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:34.277: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:34.277: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:35.278: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:35.278: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:36.280: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:36.280: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:37.279: INFO: Wrong image for pod: daemon-set-jnsxs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 25 14:56:37.279: INFO: Pod daemon-set-jnsxs is not available
Jan 25 14:56:38.282: INFO: Pod daemon-set-gxqp2 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7751, will wait for the garbage collector to delete the pods
Jan 25 14:56:38.396: INFO: Deleting DaemonSet.extensions daemon-set took: 16.282189ms
Jan 25 14:56:39.697: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300354452s
Jan 25 14:56:56.608: INFO: Number of nodes with available pods: 0
Jan 25 14:56:56.608: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 14:56:56.614: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7751/daemonsets","resourceVersion":"21825959"},"items":null}

Jan 25 14:56:56.619: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7751/pods","resourceVersion":"21825959"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:56:56.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7751" for this suite.
Jan 25 14:57:04.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:57:04.884: INFO: namespace daemonsets-7751 deletion completed in 8.246068857s

• [SLOW TEST:57.244 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
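The rollback sequence above maps onto the rollout commands; a sketch (namespace and images from the log; the container name app is an assumption):

$ kubectl -n daemonsets-7751 set image daemonset/daemon-set app=foo:non-existent    # bad update; pods wedge pulling the image
$ kubectl -n daemonsets-7751 rollout undo daemonset/daemon-set                      # revert to docker.io/library/nginx:1.14-alpine
$ kubectl -n daemonsets-7751 rollout status daemonset/daemon-set

"Without unnecessary restarts" is visible in the log: only daemon-set-jnsxs ever ran the bad image, and only it is replaced (by daemon-set-gxqp2); the healthy pod on the other node is left untouched.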
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:57:04.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 25 14:57:05.044: INFO: Waiting up to 5m0s for pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab" in namespace "containers-6309" to be "success or failure"
Jan 25 14:57:05.053: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963923ms
Jan 25 14:57:07.067: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023213693s
Jan 25 14:57:09.076: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032036987s
Jan 25 14:57:11.090: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046041883s
Jan 25 14:57:13.215: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171009112s
STEP: Saw pod success
Jan 25 14:57:13.215: INFO: Pod "client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab" satisfied condition "success or failure"
Jan 25 14:57:13.219: INFO: Trying to get logs from node iruya-node pod client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab container test-container: 
STEP: delete the pod
Jan 25 14:57:13.266: INFO: Waiting for pod client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab to disappear
Jan 25 14:57:13.274: INFO: Pod client-containers-0671a9e5-4582-47db-8a6e-33149d6667ab no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:57:13.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6309" for this suite.
Jan 25 14:57:19.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:57:19.459: INFO: namespace containers-6309 deletion completed in 6.175455914s

• [SLOW TEST:14.573 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
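In pod-spec terms, spec.containers[].command replaces the image's ENTRYPOINT and spec.containers[].args replaces its CMD. A quick way to see the override outside the suite (name and image are illustrative):

$ kubectl run entrypoint-demo --image=busybox --restart=Never --command -- /bin/echo overridden
$ kubectl logs entrypoint-demo    # prints: overridden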
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:57:19.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 25 14:57:19.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8365,SelfLink:/api/v1/namespaces/watch-8365/configmaps/e2e-watch-test-watch-closed,UID:5ab4b1f9-8925-4f75-854c-e8f3e8579394,ResourceVersion:21826042,Generation:0,CreationTimestamp:2020-01-25 14:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 14:57:19.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8365,SelfLink:/api/v1/namespaces/watch-8365/configmaps/e2e-watch-test-watch-closed,UID:5ab4b1f9-8925-4f75-854c-e8f3e8579394,ResourceVersion:21826043,Generation:0,CreationTimestamp:2020-01-25 14:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 25 14:57:19.654: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8365,SelfLink:/api/v1/namespaces/watch-8365/configmaps/e2e-watch-test-watch-closed,UID:5ab4b1f9-8925-4f75-854c-e8f3e8579394,ResourceVersion:21826044,Generation:0,CreationTimestamp:2020-01-25 14:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 14:57:19.654: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8365,SelfLink:/api/v1/namespaces/watch-8365/configmaps/e2e-watch-test-watch-closed,UID:5ab4b1f9-8925-4f75-854c-e8f3e8579394,ResourceVersion:21826045,Generation:0,CreationTimestamp:2020-01-25 14:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:57:19.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8365" for this suite.
Jan 25 14:57:25.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:57:25.916: INFO: namespace watch-8365 deletion completed in 6.165760964s

• [SLOW TEST:6.457 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
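Resuming a watch from a known resourceVersion is plain API behavior and easy to replay against this run's numbers (21826043 is the last event the first watch received):

$ kubectl proxy --port=8001 &
$ curl 'http://127.0.0.1:8001/api/v1/namespaces/watch-8365/configmaps?watch=true&resourceVersion=21826043'

The server streams every event after that version, which is exactly the MODIFIED (mutation: 2) and DELETED notifications the test observes on its second watch.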
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:57:25.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-46623a69-3670-4cd0-a426-ffee5e5f22d8
STEP: Creating a pod to test consume configMaps
Jan 25 14:57:26.046: INFO: Waiting up to 5m0s for pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505" in namespace "configmap-3278" to be "success or failure"
Jan 25 14:57:26.061: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505": Phase="Pending", Reason="", readiness=false. Elapsed: 14.800074ms
Jan 25 14:57:28.071: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024212373s
Jan 25 14:57:30.081: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034961591s
Jan 25 14:57:32.095: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048985567s
Jan 25 14:57:34.116: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069793133s
STEP: Saw pod success
Jan 25 14:57:34.116: INFO: Pod "pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505" satisfied condition "success or failure"
Jan 25 14:57:34.132: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505 container configmap-volume-test: 
STEP: delete the pod
Jan 25 14:57:34.225: INFO: Waiting for pod pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505 to disappear
Jan 25 14:57:34.229: INFO: Pod pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:57:34.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3278" for this suite.
Jan 25 14:57:40.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:57:40.452: INFO: namespace configmap-3278 deletion completed in 6.217959538s

• [SLOW TEST:14.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
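defaultMode sets the permission bits for every file projected from the configMap, while per-item mode (as in the projected test earlier) overrides it key by key. Verifying from inside the pod is a one-liner; the mount path here is an assumption, not shown in this run:

$ kubectl -n configmap-3278 exec pod-configmaps-9efce9f1-7cd2-4713-bc44-98363b1a2505 -- stat -c '%a %n' /etc/configmap-volume/data-1    # assumed path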
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:57:40.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-4l8f
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 14:57:40.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4l8f" in namespace "subpath-2425" to be "success or failure"
Jan 25 14:57:40.623: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.666326ms
Jan 25 14:57:42.638: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047967377s
Jan 25 14:57:44.649: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05914552s
Jan 25 14:57:46.666: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076698733s
Jan 25 14:57:48.684: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094109193s
Jan 25 14:57:50.689: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 10.099329206s
Jan 25 14:57:52.694: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 12.104861927s
Jan 25 14:57:54.710: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 14.12069098s
Jan 25 14:57:56.717: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 16.12768019s
Jan 25 14:57:58.725: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 18.134985285s
Jan 25 14:58:00.748: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 20.158375564s
Jan 25 14:58:02.777: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 22.187394644s
Jan 25 14:58:04.785: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 24.195240022s
Jan 25 14:58:06.795: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 26.205217222s
Jan 25 14:58:08.810: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Running", Reason="", readiness=true. Elapsed: 28.220001514s
Jan 25 14:58:10.831: INFO: Pod "pod-subpath-test-secret-4l8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.241561871s
STEP: Saw pod success
Jan 25 14:58:10.831: INFO: Pod "pod-subpath-test-secret-4l8f" satisfied condition "success or failure"
Jan 25 14:58:10.836: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-4l8f container test-container-subpath-secret-4l8f: 
STEP: delete the pod
Jan 25 14:58:11.095: INFO: Waiting for pod pod-subpath-test-secret-4l8f to disappear
Jan 25 14:58:11.114: INFO: Pod pod-subpath-test-secret-4l8f no longer exists
STEP: Deleting pod pod-subpath-test-secret-4l8f
Jan 25 14:58:11.115: INFO: Deleting pod "pod-subpath-test-secret-4l8f" in namespace "subpath-2425"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:58:11.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2425" for this suite.
Jan 25 14:58:17.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:58:17.247: INFO: namespace subpath-2425 deletion completed in 6.123172031s

• [SLOW TEST:36.794 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
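subPath mounts a single file or directory out of a volume instead of the whole tree:

$ kubectl explain pod.spec.containers.volumeMounts.subPath

One caveat relevant to reading this test: a container consuming a secret or configMap via subPath does not receive later updates to it, because subPath bypasses the atomic-writer symlink swap that whole-volume mounts use.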
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:58:17.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 25 14:58:17.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d" in namespace "projected-9417" to be "success or failure"
Jan 25 14:58:17.340: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.854253ms
Jan 25 14:58:19.348: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013079319s
Jan 25 14:58:21.375: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040006908s
Jan 25 14:58:23.384: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04864254s
Jan 25 14:58:25.393: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05812166s
Jan 25 14:58:27.404: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068651611s
STEP: Saw pod success
Jan 25 14:58:27.404: INFO: Pod "downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d" satisfied condition "success or failure"
Jan 25 14:58:27.408: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d container client-container: 
STEP: delete the pod
Jan 25 14:58:27.958: INFO: Waiting for pod downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d to disappear
Jan 25 14:58:27.969: INFO: Pod downwardapi-volume-f1ee54b0-a5f4-444f-9b32-8ce2b72e598d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:58:27.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9417" for this suite.
Jan 25 14:58:34.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:58:34.174: INFO: namespace projected-9417 deletion completed in 6.199652115s

• [SLOW TEST:16.927 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:58:34.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9cba5293-59f2-4966-a452-e0740092b067
STEP: Creating a pod to test consume secrets
Jan 25 14:58:34.278: INFO: Waiting up to 5m0s for pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7" in namespace "secrets-9553" to be "success or failure"
Jan 25 14:58:34.355: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 76.987372ms
Jan 25 14:58:36.366: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087587749s
Jan 25 14:58:38.413: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135014278s
Jan 25 14:58:40.421: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142444428s
Jan 25 14:58:42.430: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151474506s
STEP: Saw pod success
Jan 25 14:58:42.430: INFO: Pod "pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7" satisfied condition "success or failure"
Jan 25 14:58:42.432: INFO: Trying to get logs from node iruya-node pod pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7 container secret-volume-test: 
STEP: delete the pod
Jan 25 14:58:42.584: INFO: Waiting for pod pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7 to disappear
Jan 25 14:58:42.595: INFO: Pod pod-secrets-4e952840-6e8f-42bb-8887-47fd39a7b8d7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:58:42.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9553" for this suite.
Jan 25 14:58:48.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:58:48.749: INFO: namespace secrets-9553 deletion completed in 6.147254606s

• [SLOW TEST:14.574 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:58:48.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 25 14:58:48.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5352'
Jan 25 14:58:51.296: INFO: stderr: ""
Jan 25 14:58:51.296: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 25 14:58:52.305: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:52.305: INFO: Found 0 / 1
Jan 25 14:58:53.321: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:53.321: INFO: Found 0 / 1
Jan 25 14:58:54.340: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:54.340: INFO: Found 0 / 1
Jan 25 14:58:55.311: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:55.311: INFO: Found 0 / 1
Jan 25 14:58:56.314: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:56.314: INFO: Found 0 / 1
Jan 25 14:58:57.319: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:57.320: INFO: Found 0 / 1
Jan 25 14:58:58.317: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:58.317: INFO: Found 0 / 1
Jan 25 14:58:59.310: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:59.310: INFO: Found 1 / 1
Jan 25 14:58:59.310: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 25 14:58:59.315: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:59.315: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 14:58:59.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7t6k5 --namespace=kubectl-5352 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 25 14:58:59.544: INFO: stderr: ""
Jan 25 14:58:59.544: INFO: stdout: "pod/redis-master-7t6k5 patched\n"
STEP: checking annotations
Jan 25 14:58:59.548: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 14:58:59.548: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:58:59.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5352" for this suite.
Jan 25 14:59:21.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:59:21.751: INFO: namespace kubectl-5352 deletion completed in 22.198741289s

• [SLOW TEST:33.000 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
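The patch above is a strategic-merge patch against pod metadata; jsonpath is a convenient way to confirm it landed (pod and namespace from the log):

$ kubectl -n kubectl-5352 patch pod redis-master-7t6k5 -p '{"metadata":{"annotations":{"x":"y"}}}'
$ kubectl -n kubectl-5352 get pod redis-master-7t6k5 -o jsonpath='{.metadata.annotations.x}'    # y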
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:59:21.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 25 14:59:21.948: INFO: Waiting up to 5m0s for pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739" in namespace "var-expansion-3319" to be "success or failure"
Jan 25 14:59:21.969: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Pending", Reason="", readiness=false. Elapsed: 21.124907ms
Jan 25 14:59:23.984: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036170235s
Jan 25 14:59:25.990: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041826097s
Jan 25 14:59:27.997: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048892364s
Jan 25 14:59:30.003: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055349682s
Jan 25 14:59:32.018: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069777418s
STEP: Saw pod success
Jan 25 14:59:32.018: INFO: Pod "var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739" satisfied condition "success or failure"
Jan 25 14:59:32.026: INFO: Trying to get logs from node iruya-node pod var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739 container dapi-container: 
STEP: delete the pod
Jan 25 14:59:32.109: INFO: Waiting for pod var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739 to disappear
Jan 25 14:59:32.245: INFO: Pod var-expansion-791decf2-df7f-4fc1-b838-efdc0d94f739 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:59:32.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3319" for this suite.
Jan 25 14:59:38.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 14:59:38.431: INFO: namespace var-expansion-3319 deletion completed in 6.177190455s

• [SLOW TEST:16.680 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
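Args substitution uses $(VAR), resolved by Kubernetes against the container's env rather than by a shell (the inverse of the $$ escaping seen in the DNS probe commands earlier). A minimal sketch, all names illustrative; the quoted heredoc keeps the local shell from expanding $(...):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["echo"]
    args: ["$(MY_VAR)"]    # expanded by Kubernetes from the env below
    env:
    - name: MY_VAR
      value: from-env
EOF
$ kubectl logs var-expansion-demo    # prints: from-env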
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 14:59:38.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 25 14:59:46.665: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 25 14:59:56.840: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 14:59:56.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3917" for this suite.
Jan 25 15:00:02.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:00:03.020: INFO: namespace pods-3917 deletion completed in 6.167279146s

• [SLOW TEST:24.588 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
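
The graceful-deletion flow this test verifies comes down to a single Delete call with GracePeriodSeconds set. A minimal client-go sketch, under the same v0.18+ signature assumption; the pod name and namespace are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With a 30-second grace period the kubelet sends SIGTERM, waits up to
	// 30s for the container to exit, then sends SIGKILL. The pod object
	// stays visible (with a DeletionTimestamp) until termination completes,
	// which is what the "verifying the kubelet observed the termination
	// notice" step above polls for.
	grace := int64(30)
	if err := cs.CoreV1().Pods("default").Delete(context.TODO(), "pause",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}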
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:00:03.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 25 15:00:03.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1093'
Jan 25 15:00:03.804: INFO: stderr: ""
Jan 25 15:00:03.804: INFO: stdout: "pod/pause created\n"
Jan 25 15:00:03.804: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 25 15:00:03.804: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1093" to be "running and ready"
Jan 25 15:00:03.874: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 70.015322ms
Jan 25 15:00:05.913: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108197139s
Jan 25 15:00:07.927: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122871854s
Jan 25 15:00:09.934: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129361606s
Jan 25 15:00:11.944: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.139336601s
Jan 25 15:00:11.944: INFO: Pod "pause" satisfied condition "running and ready"
Jan 25 15:00:11.944: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 25 15:00:11.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1093'
Jan 25 15:00:12.166: INFO: stderr: ""
Jan 25 15:00:12.166: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 25 15:00:12.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1093'
Jan 25 15:00:12.588: INFO: stderr: ""
Jan 25 15:00:12.588: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 25 15:00:12.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1093'
Jan 25 15:00:12.730: INFO: stderr: ""
Jan 25 15:00:12.730: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 25 15:00:12.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1093'
Jan 25 15:00:12.911: INFO: stderr: ""
Jan 25 15:00:12.911: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 25 15:00:12.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1093'
Jan 25 15:00:13.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 15:00:13.165: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 25 15:00:13.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1093'
Jan 25 15:00:13.370: INFO: stderr: "No resources found.\n"
Jan 25 15:00:13.370: INFO: stdout: ""
Jan 25 15:00:13.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1093 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 15:00:13.461: INFO: stderr: ""
Jan 25 15:00:13.461: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:00:13.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1093" for this suite.
Jan 25 15:00:19.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:00:19.648: INFO: namespace kubectl-1093 deletion completed in 6.173022407s

• [SLOW TEST:16.628 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
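
The label add/remove round trip driven by `kubectl label` above is, at the API level, a patch against the pod's metadata. A minimal client-go sketch using a strategic merge patch (v0.18+ signatures assumed; pod name and namespace illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")
	// Add the label, equivalent to:
	//   kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Remove it again, equivalent to:
	//   kubectl label pods pause testing-label-
	// A null value deletes the key under merge-patch semantics.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}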
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:00:19.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7720
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7720 to expose endpoints map[]
Jan 25 15:00:19.855: INFO: Get endpoints failed (51.755735ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 25 15:00:20.876: INFO: successfully validated that service multi-endpoint-test in namespace services-7720 exposes endpoints map[] (1.072320848s elapsed)
STEP: Creating pod pod1 in namespace services-7720
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7720 to expose endpoints map[pod1:[100]]
Jan 25 15:00:25.104: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.160092773s elapsed, will retry)
Jan 25 15:00:28.147: INFO: successfully validated that service multi-endpoint-test in namespace services-7720 exposes endpoints map[pod1:[100]] (7.202786525s elapsed)
STEP: Creating pod pod2 in namespace services-7720
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7720 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 25 15:00:32.494: INFO: Unexpected endpoints: found map[da6cf9c2-c601-4493-b9ae-006532345e54:[100]], expected map[pod1:[100] pod2:[101]] (4.340800123s elapsed, will retry)
Jan 25 15:00:35.272: INFO: successfully validated that service multi-endpoint-test in namespace services-7720 exposes endpoints map[pod1:[100] pod2:[101]] (7.118093282s elapsed)
STEP: Deleting pod pod1 in namespace services-7720
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7720 to expose endpoints map[pod2:[101]]
Jan 25 15:00:35.315: INFO: successfully validated that service multi-endpoint-test in namespace services-7720 exposes endpoints map[pod2:[101]] (35.844771ms elapsed)
STEP: Deleting pod pod2 in namespace services-7720
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7720 to expose endpoints map[]
Jan 25 15:00:35.408: INFO: successfully validated that service multi-endpoint-test in namespace services-7720 exposes endpoints map[] (20.161481ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:00:35.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7720" for this suite.
Jan 25 15:00:57.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:00:57.635: INFO: namespace services-7720 deletion completed in 22.185318057s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:37.986 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
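
A multiport service such as multi-endpoint-test declares one named ServicePort per container port, and the endpoints controller then lists each ready pod under every port it serves, producing maps like map[pod1:[100] pod2:[101]] above. A minimal client-go sketch (v0.18+ signatures assumed; the selector, port names, and namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-demo"},
			Ports: []corev1.ServicePort{
				// Pods matching the selector appear in the endpoints object
				// under whichever of these target ports they actually expose.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The "Get endpoints failed ... ignoring for 5s" line above is expected: the endpoints object is created asynchronously after the service, so the first poll can race it.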
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:00:57.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 15:01:15.930: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:15.953: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:17.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:18.008: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:19.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:19.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:21.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:21.968: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:23.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:23.962: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:25.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:25.967: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:27.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:27.968: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:29.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:29.978: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:31.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:31.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:33.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:33.972: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:35.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:35.965: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:37.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:37.965: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:39.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:39.963: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:41.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:41.967: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:43.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:43.969: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:45.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:45.976: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 15:01:47.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 15:01:47.982: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:01:48.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-606" for this suite.
Jan 25 15:02:10.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:02:10.308: INFO: namespace container-lifecycle-hook-606 deletion completed in 22.143796519s

• [SLOW TEST:72.673 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
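
The long "still exists" polling above is the preStop hook at work: the hook command runs inside the container once deletion starts, before SIGTERM, and the pod object remains until the hook and grace period finish. A minimal sketch of such a pod (v0.18+ signatures assumed; note that k8s.io/api v0.23+ names the hook type LifecycleHandler, while older releases, including the v1.15 line exercised here, call it Handler; the image and command are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs in the container when deletion begins, before the
					// kubelet sends SIGTERM.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop ran"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}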
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:02:10.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 15:02:10.515: INFO: Creating ReplicaSet my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803
Jan 25 15:02:10.528: INFO: Pod name my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803: Found 0 pods out of 1
Jan 25 15:02:15.549: INFO: Pod name my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803: Found 1 pods out of 1
Jan 25 15:02:15.549: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803" is running
Jan 25 15:02:19.565: INFO: Pod "my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803-lbkln" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 15:02:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 15:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 15:02:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 15:02:10 +0000 UTC Reason: Message:}])
Jan 25 15:02:19.565: INFO: Trying to dial the pod
Jan 25 15:02:24.593: INFO: Controller my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803: Got expected result from replica 1 [my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803-lbkln]: "my-hostname-basic-8c3545b9-cc3b-4470-a1d9-c930314d2803-lbkln", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:02:24.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9203" for this suite.
Jan 25 15:02:30.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:02:30.707: INFO: namespace replicaset-9203 deletion completed in 6.109237276s

• [SLOW TEST:20.398 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
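
The ReplicaSet this test creates pairs a label selector with a matching pod template and a hostname-serving container. A minimal client-go sketch (v0.18+ signatures assumed; the image tag here is illustrative, not taken from the run above):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			// The selector must match the template labels, or the API
			// server rejects the object.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The "Got expected result from replica 1" line above is the test dialing each replica and checking that it reports its own pod name, proving every replica serves.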
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:02:30.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 15:02:30.810: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 25 15:02:35.824: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 15:02:39.842: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 25 15:02:39.914: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-192,SelfLink:/apis/apps/v1/namespaces/deployment-192/deployments/test-cleanup-deployment,UID:9b3f6325-4c99-4e6c-958d-aebc61639e00,ResourceVersion:21826826,Generation:1,CreationTimestamp:2020-01-25 15:02:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 15:02:40.004: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-192,SelfLink:/apis/apps/v1/namespaces/deployment-192/replicasets/test-cleanup-deployment-55bbcbc84c,UID:620613e0-e3ee-45c2-bd4c-8f0a9d83cc6e,ResourceVersion:21826834,Generation:1,CreationTimestamp:2020-01-25 15:02:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9b3f6325-4c99-4e6c-958d-aebc61639e00 0xc0027e1487 0xc0027e1488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 15:02:40.004: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 25 15:02:40.004: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-192,SelfLink:/apis/apps/v1/namespaces/deployment-192/replicasets/test-cleanup-controller,UID:6ee6fcd8-098f-4ab1-bdc5-1b4af347bbee,ResourceVersion:21826827,Generation:1,CreationTimestamp:2020-01-25 15:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9b3f6325-4c99-4e6c-958d-aebc61639e00 0xc0027e121f 0xc0027e1230}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 25 15:02:40.059: INFO: Pod "test-cleanup-controller-tzcfd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tzcfd,GenerateName:test-cleanup-controller-,Namespace:deployment-192,SelfLink:/api/v1/namespaces/deployment-192/pods/test-cleanup-controller-tzcfd,UID:87ab05d8-b0fc-4dee-b48e-770d3e839147,ResourceVersion:21826823,Generation:0,CreationTimestamp:2020-01-25 15:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 6ee6fcd8-098f-4ab1-bdc5-1b4af347bbee 0xc0027d2837 0xc0027d2838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2rfwb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2rfwb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2rfwb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d28b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d28d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:02:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:02:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:02:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:02:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-25 15:02:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:02:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11035ac319cae3b01fee5b9420d06efef1975be9951ab88a3bf77e8bcf14864b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:02:40.059: INFO: Pod "test-cleanup-deployment-55bbcbc84c-tlpgx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-tlpgx,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-192,SelfLink:/api/v1/namespaces/deployment-192/pods/test-cleanup-deployment-55bbcbc84c-tlpgx,UID:17e725d4-e920-45ce-96ec-ca8c17df384b,ResourceVersion:21826832,Generation:0,CreationTimestamp:2020-01-25 15:02:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 620613e0-e3ee-45c2-bd4c-8f0a9d83cc6e 0xc0027d2a87 0xc0027d2a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2rfwb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2rfwb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2rfwb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d2bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d2c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:02:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:02:40.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-192" for this suite.
Jan 25 15:02:48.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:02:48.332: INFO: namespace deployment-192 deletion completed in 8.249034195s

• [SLOW TEST:17.625 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
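
The behavior asserted here hinges on the RevisionHistoryLimit:*0 visible in the Deployment dump: with a limit of zero, the deployment controller deletes an old ReplicaSet as soon as it is fully scaled down. A minimal client-go sketch of such a deployment (v0.18+ signatures assumed; the image matches the dump above, the namespace is illustrative):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"name": "cleanup-pod"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Zero retained revisions: superseded ReplicaSets are garbage
			// collected immediately after scale-down.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}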
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:02:48.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 15:02:48.532: INFO: Waiting up to 5m0s for pod "pod-a0f2895b-e120-4267-a304-07fca5660802" in namespace "emptydir-6628" to be "success or failure"
Jan 25 15:02:48.588: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Pending", Reason="", readiness=false. Elapsed: 56.320487ms
Jan 25 15:02:50.599: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067522442s
Jan 25 15:02:52.616: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08477379s
Jan 25 15:02:54.623: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090841809s
Jan 25 15:02:56.634: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102260078s
Jan 25 15:02:58.652: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120420322s
STEP: Saw pod success
Jan 25 15:02:58.652: INFO: Pod "pod-a0f2895b-e120-4267-a304-07fca5660802" satisfied condition "success or failure"
Jan 25 15:02:58.666: INFO: Trying to get logs from node iruya-node pod pod-a0f2895b-e120-4267-a304-07fca5660802 container test-container: 
STEP: delete the pod
Jan 25 15:02:58.796: INFO: Waiting for pod pod-a0f2895b-e120-4267-a304-07fca5660802 to disappear
Jan 25 15:02:58.842: INFO: Pod pod-a0f2895b-e120-4267-a304-07fca5660802 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:02:58.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6628" for this suite.
Jan 25 15:03:04.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:03:05.087: INFO: namespace emptydir-6628 deletion completed in 6.225891414s

• [SLOW TEST:16.754 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
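
The "(root,0666,tmpfs)" variant means the pod writes a file with mode 0666, as root, into an emptyDir backed by memory. A minimal client-go sketch of an equivalent pod (v0.18+ signatures assumed; the image, command, and namespace are illustrative, and the real test uses a dedicated mounttest image rather than busybox):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead
					// of node-local disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test/f && chmod 0666 /test/f && ls -l /test/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}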
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:03:05.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 25 15:03:05.190: INFO: Creating deployment "nginx-deployment"
Jan 25 15:03:05.194: INFO: Waiting for observed generation 1
Jan 25 15:03:07.828: INFO: Waiting for all required pods to come up
Jan 25 15:03:08.622: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 25 15:03:35.359: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 25 15:03:35.368: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 25 15:03:35.377: INFO: Updating deployment nginx-deployment
Jan 25 15:03:35.377: INFO: Waiting for observed generation 2
Jan 25 15:03:38.062: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 25 15:03:38.066: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 25 15:03:38.091: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 25 15:03:38.102: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 25 15:03:38.102: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 25 15:03:38.105: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 25 15:03:38.110: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 25 15:03:38.110: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 25 15:03:38.119: INFO: Updating deployment nginx-deployment
Jan 25 15:03:38.119: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 25 15:03:38.683: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 25 15:03:38.979: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 25 15:03:43.160: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3974,SelfLink:/apis/apps/v1/namespaces/deployment-3974/deployments/nginx-deployment,UID:e88d141d-78a4-45b5-9847-3f0e0311fc96,ResourceVersion:21827155,Generation:3,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-25 15:03:35 +0000 UTC 2020-01-25 15:03:05 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-25 15:03:38 +0000 UTC 2020-01-25 15:03:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 25 15:03:46.085: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3974,SelfLink:/apis/apps/v1/namespaces/deployment-3974/replicasets/nginx-deployment-55fb7cb77f,UID:26c7a17b-2fe7-49f3-a0ff-47e54ce756be,ResourceVersion:21827196,Generation:3,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e88d141d-78a4-45b5-9847-3f0e0311fc96 0xc002c7f5c7 0xc002c7f5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 15:03:46.085: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 25 15:03:46.085: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3974,SelfLink:/apis/apps/v1/namespaces/deployment-3974/replicasets/nginx-deployment-7b8c6f4498,UID:f47d2a78-f81a-4f36-85fa-8c6ee361ef66,ResourceVersion:21827190,Generation:3,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e88d141d-78a4-45b5-9847-3f0e0311fc96 0xc002c7f697 0xc002c7f698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 25 15:03:48.250: INFO: Pod "nginx-deployment-55fb7cb77f-2gtcb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2gtcb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-2gtcb,UID:5f7e6231-097a-4d9b-bb07-d3728e125dae,ResourceVersion:21827183,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0025d1c27 0xc0025d1c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d1c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d1cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.250: INFO: Pod "nginx-deployment-55fb7cb77f-4dm95" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4dm95,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-4dm95,UID:934938ef-3ad3-4265-82f2-de5f130bf266,ResourceVersion:21827160,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0025d1d37 0xc0025d1d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d1db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d1dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.251: INFO: Pod "nginx-deployment-55fb7cb77f-4pzmx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4pzmx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-4pzmx,UID:830897ed-adaf-446b-9ea0-5b1da988af47,ResourceVersion:21827184,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0025d1e57 0xc0025d1e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d1ed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d1ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.251: INFO: Pod "nginx-deployment-55fb7cb77f-b2bd7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b2bd7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-b2bd7,UID:6705eee0-c688-485a-9d67-c8b03bc3f57d,ResourceVersion:21827182,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0025d1f77 0xc0025d1f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d1ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.251: INFO: Pod "nginx-deployment-55fb7cb77f-c29wd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c29wd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-c29wd,UID:46a6e5f3-1dbf-46de-b425-a1d147faffdf,ResourceVersion:21827154,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae097 0xc0026ae098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.251: INFO: Pod "nginx-deployment-55fb7cb77f-dbmxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dbmxq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-dbmxq,UID:51b90136-2d48-49ea-bd2a-7fbd1befbb56,ResourceVersion:21827129,Generation:0,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae1a7 0xc0026ae1a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 15:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-flsx8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-flsx8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-flsx8,UID:4d05b83c-d50c-444d-b48a-55f396b7bab5,ResourceVersion:21827185,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae317 0xc0026ae318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-kpdz8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kpdz8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-kpdz8,UID:55829fbd-0b34-4f0f-be1a-1e0d625714f4,ResourceVersion:21827107,Generation:0,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae427 0xc0026ae428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae4a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 15:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-ndkdg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ndkdg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-ndkdg,UID:58ef6bc0-b2fc-43b7-9b63-3b8ed12b7482,ResourceVersion:21827128,Generation:0,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae597 0xc0026ae598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-25 15:03:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-np4wj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-np4wj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-np4wj,UID:04d07b47-c032-47d2-b7ce-6df601e66232,ResourceVersion:21827126,Generation:0,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae6f7 0xc0026ae6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 15:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-qdzk9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qdzk9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-qdzk9,UID:867011a6-0bbb-4ab2-b676-b603e1c5887b,ResourceVersion:21827122,Generation:0,CreationTimestamp:2020-01-25 15:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae867 0xc0026ae868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ae8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ae8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-25 15:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.252: INFO: Pod "nginx-deployment-55fb7cb77f-t9nbb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t9nbb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-t9nbb,UID:9e0e5cbd-ef76-4897-8f9a-fab0b3e3ab6a,ResourceVersion:21827188,Generation:0,CreationTimestamp:2020-01-25 15:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026ae9c7 0xc0026ae9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026aea40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aea60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.253: INFO: Pod "nginx-deployment-55fb7cb77f-xzxpt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xzxpt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-55fb7cb77f-xzxpt,UID:ea3e4d20-93ad-47ef-9e65-ad07d0eda098,ResourceVersion:21827169,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 26c7a17b-2fe7-49f3-a0ff-47e54ce756be 0xc0026aeae7 0xc0026aeae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026aeb50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aeb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
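[Editor's note, not part of the captured log] The dumps above are the pods owned by ReplicaSet nginx-deployment-55fb7cb77f, whose template references the image nginx:404; every one of them is Phase=Pending with either no Ready condition at all or Ready=False, which is exactly why the framework prints "is not available" for each. Below is a minimal, self-contained sketch of the availability predicate this implies: a pod counts as available once its Ready condition has been True for at least minReadySeconds. This is a simplified model, not the e2e framework's actual code, and the Pod/PodCondition structs are trimmed stand-ins for the real k8s.io/api/core/v1 types.

package main

import (
	"fmt"
	"time"
)

type PodCondition struct {
	Type               string    // e.g. "Ready" or "PodScheduled"
	Status             string    // "True" / "False"
	LastTransitionTime time.Time // when the condition last changed
}

type Pod struct {
	Name       string
	Conditions []PodCondition
}

// isPodAvailable mirrors the usual controller logic: find the Ready
// condition, require Status == "True", and require that it has held
// for at least minReadySeconds.
func isPodAvailable(p Pod, minReadySeconds time.Duration, now time.Time) bool {
	for _, c := range p.Conditions {
		if c.Type != "Ready" {
			continue
		}
		if c.Status != "True" {
			return false
		}
		return minReadySeconds == 0 || !c.LastTransitionTime.Add(minReadySeconds).After(now)
	}
	return false // no Ready condition recorded yet (e.g. still Pending, as above)
}

func main() {
	now := time.Now()
	// Shaped like the dumps above: the nginx:404 pod has only PodScheduled=True.
	pending := Pod{Name: "nginx-deployment-55fb7cb77f-4dm95",
		Conditions: []PodCondition{{Type: "PodScheduled", Status: "True", LastTransitionTime: now}}}
	running := Pod{Name: "nginx-deployment-7b8c6f4498-4d5pf",
		Conditions: []PodCondition{{Type: "Ready", Status: "True", LastTransitionTime: now.Add(-30 * time.Second)}}}
	fmt.Println(pending.Name, "available:", isPodAvailable(pending, 0, now)) // false
	fmt.Println(running.Name, "available:", isPodAvailable(running, 0, now)) // true
}

The pods that follow belong to the other ReplicaSet, nginx-deployment-7b8c6f4498 (image nginx:1.14-alpine); most of those are Running with Ready=True, so the framework reports them "is available".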
Jan 25 15:03:48.253: INFO: Pod "nginx-deployment-7b8c6f4498-29ds5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-29ds5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-29ds5,UID:fb627a1b-deb9-46e4-bf4d-24214eb5ea64,ResourceVersion:21827171,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026aebf7 0xc0026aebf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026aec70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aec90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.253: INFO: Pod "nginx-deployment-7b8c6f4498-4d5pf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4d5pf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-4d5pf,UID:34ca7578-169b-4ca7-9e0c-f4f3b2828a45,ResourceVersion:21827044,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026aed17 0xc0026aed18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026aeda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e630efbd68ff187a6af6bbb0721b73baec34a68af8962f466a96f4a2f1d440fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.253: INFO: Pod "nginx-deployment-7b8c6f4498-8vvf8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vvf8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-8vvf8,UID:0b33a0df-0009-4b65-bdbe-c32c4d18a6b1,ResourceVersion:21827046,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026aeea7 0xc0026aeea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026aef30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aef50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3ad8c5bbf54cafae9f9cd35e5e1faf68b6254c76634b19e5579fe70c81fe0381}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.253: INFO: Pod "nginx-deployment-7b8c6f4498-bqrh4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bqrh4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-bqrh4,UID:be1a85f6-b747-45ca-a3f1-087c44f0037d,ResourceVersion:21827051,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af047 0xc0026af048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://824d4bada365181799da5b9efc4067c07d285c5e90540b8ef7601f206a9dcaff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-dr95m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dr95m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-dr95m,UID:1c966ca5-e1e6-4212-b6f6-c292a6bd90fb,ResourceVersion:21827179,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af1b7 0xc0026af1b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-f4sj4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f4sj4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-f4sj4,UID:fb8b1b06-bd11-4a67-b056-e83046ee00ef,ResourceVersion:21827027,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af2d7 0xc0026af2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://817b2756756a367c25cb095dd3df4d6bd8cc8f1874b13c276e75280a07040fbb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-flmbn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-flmbn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-flmbn,UID:1a86b8ae-2e8d-4e0f-b14c-bfa48f876c63,ResourceVersion:21827159,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af447 0xc0026af448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af4b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-gxmhp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gxmhp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-gxmhp,UID:c361216a-04f8-4d66-ac48-a88a5a590136,ResourceVersion:21827039,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af557 0xc0026af558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7e6db6663b7f0814d594bc2bf18fc47f5b5fb5d1915e5f57038b56304ee5a3ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-hwvjw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwvjw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-hwvjw,UID:dd43514c-673e-4f03-aeea-82f2ebae6b6e,ResourceVersion:21827067,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af6c7 0xc0026af6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0703b4a013fdb18a5c5b3cceb3019a874b4fd928e5a7cbb2ce824eb6afe3d3a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-m88qc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m88qc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-m88qc,UID:c23846f0-22a8-43f5-9184-892285328736,ResourceVersion:21827062,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af827 0xc0026af828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026af890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026af8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2180d6e74d004be8a7ea91dd2ab7b94c056406cf5bbe33911dcdb36a2d01de48}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.254: INFO: Pod "nginx-deployment-7b8c6f4498-n9ddf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n9ddf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-n9ddf,UID:ae9033ab-8595-45c2-bd45-d58a869a0957,ResourceVersion:21827191,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026af987 0xc0026af988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026afa00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026afa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 15:03:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.255: INFO: Pod "nginx-deployment-7b8c6f4498-q5wjh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q5wjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-q5wjh,UID:8ddcf780-1c33-4dc6-9293-4157ae234d50,ResourceVersion:21827059,Generation:0,CreationTimestamp:2020-01-25 15:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026afae7 0xc0026afae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026afb50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026afb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-25 15:03:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 15:03:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0cfbdeb9b9ded1ff4a469f9d9688bbc2745b607b8dfb102b53bfff2c4180ec23}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.255: INFO: Pod "nginx-deployment-7b8c6f4498-qvtd2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qvtd2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-qvtd2,UID:2706c61c-1d29-4d02-81bd-7a2cc0dd708b,ResourceVersion:21827177,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026afc47 0xc0026afc48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026afcc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026afce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.255: INFO: Pod "nginx-deployment-7b8c6f4498-r5kg6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r5kg6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-r5kg6,UID:452ece77-9917-4968-964d-122727afb040,ResourceVersion:21827180,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026afd67 0xc0026afd68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026afde0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026afe00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-rbbdw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rbbdw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-rbbdw,UID:8082cf08-8f25-40c5-b0a0-44ec4ad9b1dc,ResourceVersion:21827153,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026afe87 0xc0026afe88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026afef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026aff10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-s2775" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s2775,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-s2775,UID:3bf5bfb5-04ae-403a-97e5-ae8ccdb4aa1f,ResourceVersion:21827181,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026aff97 0xc0026aff98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-sk4lv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sk4lv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-sk4lv,UID:d6629f36-437f-49df-919f-d846c2e181ee,ResourceVersion:21827194,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026720a7 0xc0026720a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-25 15:03:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-tgq8p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tgq8p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-tgq8p,UID:df235759-bc9a-4266-b0ad-404f7f938048,ResourceVersion:21827203,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc0026721f7 0xc0026721f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-25 15:03:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-twgp4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-twgp4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-twgp4,UID:f2ad65f7-068c-4eb3-b6a0-a58e184eabdf,ResourceVersion:21827165,Generation:0,CreationTimestamp:2020-01-25 15:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc002672357 0xc002672358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026723d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026723f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 15:03:48.256: INFO: Pod "nginx-deployment-7b8c6f4498-vm2x9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vm2x9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3974,SelfLink:/api/v1/namespaces/deployment-3974/pods/nginx-deployment-7b8c6f4498-vm2x9,UID:bc786009-6f5a-4eee-a53c-255b80d7f425,ResourceVersion:21827178,Generation:0,CreationTimestamp:2020-01-25 15:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f47d2a78-f81a-4f36-85fa-8c6ee361ef66 0xc002672477 0xc002672478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tlkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tlkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6tlkx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026724f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 15:03:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:03:48.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3974" for this suite.
Jan 25 15:04:36.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:04:37.381: INFO: namespace deployment-3974 deletion completed in 47.697504482s

• [SLOW TEST:92.293 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
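For reference, the proportional scaling exercised above splits a replica delta across a Deployment's ReplicaSets in proportion to their current sizes, so larger ReplicaSets absorb more of the change. A minimal Go sketch of that arithmetic, with illustrative names rather than the controller's actual helpers:

package main

import "fmt"

// proportion returns the share of a replica delta owed to a ReplicaSet
// holding rsReplicas out of total replicas (integer division; the real
// controller distributes leftover remainders separately).
func proportion(rsReplicas, delta, total int32) int32 {
	if total == 0 {
		return 0
	}
	return rsReplicas * delta / total
}

func main() {
	// A deployment scaled from 10 to 30 replicas (delta +20) while two
	// ReplicaSets hold 8 and 2 of the existing pods.
	fmt.Println(proportion(8, 20, 10)) // 16 extra replicas
	fmt.Println(proportion(2, 20, 10)) // 4 extra replicas
}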
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:04:37.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 25 15:04:37.558: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 15:04:37.584: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 15:04:37.591: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 25 15:04:37.614: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.614: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 15:04:37.614: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 25 15:04:37.614: INFO: 	Container weave ready: true, restart count 0
Jan 25 15:04:37.614: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 15:04:37.614: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 25 15:04:37.629: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container etcd ready: true, restart count 0
Jan 25 15:04:37.629: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 25 15:04:37.629: INFO: 	Container weave ready: true, restart count 0
Jan 25 15:04:37.629: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 15:04:37.629: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container coredns ready: true, restart count 0
Jan 25 15:04:37.629: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 25 15:04:37.629: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 15:04:37.629: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 25 15:04:37.629: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 25 15:04:37.629: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 25 15:04:37.629: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed2980df232e89], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:04:38.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9613" for this suite.
Jan 25 15:04:44.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:04:45.002: INFO: namespace sched-pred-9613 deletion completed in 6.245612384s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.621 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
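The FailedScheduling event above is the expected outcome: a pod with a non-empty spec.nodeSelector can only land on nodes whose labels contain every selector entry, and neither node here carries the test's label. A simplified sketch of that predicate check (not the scheduler's exact code path):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// matchesNodeSelector reports whether a node's labels satisfy the pod's
// nodeSelector, mirroring the scheduling predicate in simplified form.
func matchesNodeSelector(pod *v1.Pod, node *v1.Node) bool {
	if len(pod.Spec.NodeSelector) == 0 {
		return true // no selector means any node qualifies
	}
	sel := labels.SelectorFromSet(pod.Spec.NodeSelector)
	return sel.Matches(labels.Set(node.Labels))
}

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{
		NodeSelector: map[string]string{"env": "does-not-exist"}, // illustrative label
	}}
	node := &v1.Node{ObjectMeta: metav1.ObjectMeta{
		Labels: map[string]string{"kubernetes.io/hostname": "iruya-node"},
	}}
	// false for every node yields "0/2 nodes are available: 2 node(s)
	// didn't match node selector." as in the event above.
	fmt.Println(matchesNodeSelector(pod, node))
}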
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:04:45.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-997a4653-edd4-43b6-994f-426eef38b7e0
STEP: Creating a pod to test consume secrets
Jan 25 15:04:45.125: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972" in namespace "projected-1872" to be "success or failure"
Jan 25 15:04:45.168: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Pending", Reason="", readiness=false. Elapsed: 42.761621ms
Jan 25 15:04:47.180: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055165813s
Jan 25 15:04:49.191: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065467862s
Jan 25 15:04:51.203: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078123483s
Jan 25 15:04:53.214: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Running", Reason="", readiness=true. Elapsed: 8.089109348s
Jan 25 15:04:55.221: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09583288s
STEP: Saw pod success
Jan 25 15:04:55.221: INFO: Pod "pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972" satisfied condition "success or failure"
Jan 25 15:04:55.224: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 15:04:55.287: INFO: Waiting for pod pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972 to disappear
Jan 25 15:04:55.293: INFO: Pod pod-projected-secrets-47e9052c-c954-4137-8bf4-227c3a4ed972 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:04:55.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1872" for this suite.
Jan 25 15:05:01.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:05:01.413: INFO: namespace projected-1872 deletion completed in 6.114028006s

• [SLOW TEST:16.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
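The "with mappings" variant above differs from the plain secret-volume test in that each secret key is remapped to a new relative path (and optionally a file mode) inside the projected volume. A sketch of the relevant volume fragment, with illustrative secret and key names:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // file mode applied to the mapped item
	vol := v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test-map"},
						// Items is the "mapping": the stored key surfaces at a
						// different relative path in the container.
						Items: []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected.Sources[0].Secret.Items)
}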
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:05:01.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-7441
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7441 to expose endpoints map[]
Jan 25 15:05:01.578: INFO: Get endpoints failed (9.3925ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 25 15:05:02.598: INFO: successfully validated that service endpoint-test2 in namespace services-7441 exposes endpoints map[] (1.029349669s elapsed)
STEP: Creating pod pod1 in namespace services-7441
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7441 to expose endpoints map[pod1:[80]]
Jan 25 15:05:06.720: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.090109954s elapsed, will retry)
Jan 25 15:05:12.450: INFO: successfully validated that service endpoint-test2 in namespace services-7441 exposes endpoints map[pod1:[80]] (9.820346955s elapsed)
STEP: Creating pod pod2 in namespace services-7441
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7441 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 25 15:05:17.680: INFO: Unexpected endpoints: found map[c86c2dd3-8342-4d6a-a30b-783b2682d211:[80]], expected map[pod1:[80] pod2:[80]] (5.21409068s elapsed, will retry)
Jan 25 15:05:19.708: INFO: successfully validated that service endpoint-test2 in namespace services-7441 exposes endpoints map[pod1:[80] pod2:[80]] (7.241588465s elapsed)
STEP: Deleting pod pod1 in namespace services-7441
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7441 to expose endpoints map[pod2:[80]]
Jan 25 15:05:19.748: INFO: successfully validated that service endpoint-test2 in namespace services-7441 exposes endpoints map[pod2:[80]] (19.766972ms elapsed)
STEP: Deleting pod pod2 in namespace services-7441
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7441 to expose endpoints map[]
Jan 25 15:05:20.775: INFO: successfully validated that service endpoint-test2 in namespace services-7441 exposes endpoints map[] (1.019824606s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:05:20.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7441" for this suite.
Jan 25 15:05:42.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:05:43.060: INFO: namespace services-7441 deletion completed in 22.234377538s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.647 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
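Each "expose endpoints map[...]" line above compares the service's Endpoints object against an expected pod-name-to-ports map. A simplified sketch of flattening Endpoints into that shape (the framework's real helper also tolerates not-ready addresses and retries, as the "will retry" lines show):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// portsByPod flattens an Endpoints object into pod name -> ports, roughly
// the map the e2e framework validates (e.g. map[pod1:[80] pod2:[80]]).
func portsByPod(ep *v1.Endpoints) map[string][]int32 {
	out := map[string][]int32{}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			name := addr.IP
			if addr.TargetRef != nil {
				name = addr.TargetRef.Name // normally the backing pod's name
			}
			for _, p := range subset.Ports {
				out[name] = append(out[name], p.Port)
			}
		}
	}
	return out
}

func main() {
	ep := &v1.Endpoints{Subsets: []v1.EndpointSubset{{
		Addresses: []v1.EndpointAddress{{IP: "10.32.0.4", TargetRef: &v1.ObjectReference{Name: "pod1"}}},
		Ports:     []v1.EndpointPort{{Port: 80}},
	}}}
	fmt.Println(portsByPod(ep)) // map[pod1:[80]]
}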
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:05:43.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 15:05:43.304: INFO: Waiting up to 5m0s for pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b" in namespace "emptydir-306" to be "success or failure"
Jan 25 15:05:43.372: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 67.87266ms
Jan 25 15:05:45.385: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081132559s
Jan 25 15:05:47.393: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088235242s
Jan 25 15:05:49.399: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094861954s
Jan 25 15:05:51.415: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111063921s
Jan 25 15:05:53.424: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11993794s
STEP: Saw pod success
Jan 25 15:05:53.424: INFO: Pod "pod-a5201013-cdc0-42a3-b717-c03485c18d5b" satisfied condition "success or failure"
Jan 25 15:05:53.428: INFO: Trying to get logs from node iruya-node pod pod-a5201013-cdc0-42a3-b717-c03485c18d5b container test-container: 
STEP: delete the pod
Jan 25 15:05:53.566: INFO: Waiting for pod pod-a5201013-cdc0-42a3-b717-c03485c18d5b to disappear
Jan 25 15:05:53.574: INFO: Pod pod-a5201013-cdc0-42a3-b717-c03485c18d5b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:05:53.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-306" for this suite.
Jan 25 15:05:59.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:05:59.754: INFO: namespace emptydir-306 deletion completed in 6.172894481s

• [SLOW TEST:16.692 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
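The pod created for the (root,0644,tmpfs) case mounts an emptyDir backed by memory (tmpfs) and verifies a file written as root with mode 0644. A sketch of the volume and mount shape; the container image and command are illustrative, not the suite's actual mounttest invocation:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever, // the pod should end Succeeded, as logged above
		Volumes: []v1.Volume{{
			Name: "test-volume",
			VolumeSource: v1.VolumeSource{
				// Medium "Memory" backs the emptyDir with tmpfs.
				EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
			},
		}},
		Containers: []v1.Container{{
			Name:  "test-container",
			Image: "busybox", // illustrative
			Command: []string{"sh", "-c",
				"touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
			VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.EmptyDir.Medium)
}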
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:05:59.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 15:06:07.959: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:06:07.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-788" for this suite.
Jan 25 15:06:14.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:06:14.188: INFO: namespace container-runtime-788 deletion completed in 6.201469981s

• [SLOW TEST:14.434 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
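In the termination-message case above, the container runs as a non-root user and writes "DONE" to a custom terminationMessagePath; after the container exits, the kubelet reads that file and surfaces it in the container status, which is what the "Expected: &{DONE} to match" line verifies. A hedged sketch of such a container spec (image and path are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID satisfies the "as non-root user" part
	c := v1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// Non-default path; the kubelet reads this file on exit and reports
		// its contents as the termination message.
		TerminationMessagePath:   "/dev/termination-custom-log",
		TerminationMessagePolicy: v1.TerminationMessageReadFile,
		SecurityContext:          &v1.SecurityContext{RunAsUser: &uid},
	}
	fmt.Println(c.TerminationMessagePath)
}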
SSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:06:14.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 25 15:06:14.251: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 25 15:06:15.033: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 25 15:06:17.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561574, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 15:06:19.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561574, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 15:06:21.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561574, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 15:06:23.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561575, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715561574, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 15:06:29.702: INFO: Waited 4.45295893s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:06:30.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3423" for this suite.
Jan 25 15:06:36.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:06:36.752: INFO: namespace aggregator-3423 deletion completed in 6.29201075s

• [SLOW TEST:22.563 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
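STEP "Registering the sample API server" amounts to creating an APIService object that tells the aggregation layer to proxy one group/version to the in-cluster service backing the sample-apiserver deployment whose rollout is logged above. A sketch with the kube-aggregator v1 types; the group, namespace, and service names are placeholders:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apiService := &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com", // placeholder group
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-3423", // the test namespace above
				Name:      "sample-api",      // placeholder service name
			},
			InsecureSkipTLSVerify: true, // a real registration sets CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	fmt.Println(apiService.Name)
}

Once the backing deployment reports MinimumReplicasAvailable, the aggregator starts serving the new group, which is the readiness the test then waits on.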
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:06:36.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 15:06:36.846: INFO: Waiting up to 5m0s for pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7" in namespace "emptydir-430" to be "success or failure"
Jan 25 15:06:36.875: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.113497ms
Jan 25 15:06:38.885: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038757346s
Jan 25 15:06:40.894: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047283443s
Jan 25 15:06:42.907: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060744723s
Jan 25 15:06:44.920: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073490637s
Jan 25 15:06:46.930: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083967963s
STEP: Saw pod success
Jan 25 15:06:46.930: INFO: Pod "pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7" satisfied condition "success or failure"
Jan 25 15:06:46.935: INFO: Trying to get logs from node iruya-node pod pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7 container test-container: 
STEP: delete the pod
Jan 25 15:06:47.001: INFO: Waiting for pod pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7 to disappear
Jan 25 15:06:47.005: INFO: Pod pod-dc4ca897-42d2-4a4c-a01a-a832b3a2d5b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:06:47.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-430" for this suite.
Jan 25 15:06:53.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:06:53.175: INFO: namespace emptydir-430 deletion completed in 6.124174493s

• [SLOW TEST:16.422 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
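[Editor's illustration] The EmptyDir (non-root,0777,default) spec above creates a throwaway pod whose container writes into an emptyDir volume on the default medium (node disk), checks the 0777 mode bits while running as a non-root user, and exits zero so the pod reaches Succeeded. A minimal stand-alone sketch of such a pod, assuming v1.15-era client-go signatures (newer client-go adds context and options arguments); the pod name, UID, namespace, and busybox command are illustrative, not the framework's actual mounttest image:

package main

import (
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	nonRoot := int64(1001) // the (non-root,...) variants run as a non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium: backed by node disk; medium "Memory" would use tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	// v1.15-era clientsets take no context argument.
	if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
		log.Fatal(err)
	}
}

The (non-root,0644,default) spec that follows differs only in the mode bits it sets and verifies.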
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 25 15:06:53.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 25 15:06:53.285: INFO: Waiting up to 5m0s for pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d" in namespace "emptydir-5213" to be "success or failure"
Jan 25 15:06:53.289: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.270775ms
Jan 25 15:06:55.309: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023814551s
Jan 25 15:06:57.318: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032518087s
Jan 25 15:06:59.365: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079422634s
Jan 25 15:07:01.383: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097489622s
STEP: Saw pod success
Jan 25 15:07:01.383: INFO: Pod "pod-cae06086-be08-4645-8cb9-9ae1fea3e02d" satisfied condition "success or failure"
Jan 25 15:07:01.396: INFO: Trying to get logs from node iruya-node pod pod-cae06086-be08-4645-8cb9-9ae1fea3e02d container test-container: 
STEP: delete the pod
Jan 25 15:07:01.475: INFO: Waiting for pod pod-cae06086-be08-4645-8cb9-9ae1fea3e02d to disappear
Jan 25 15:07:01.480: INFO: Pod pod-cae06086-be08-4645-8cb9-9ae1fea3e02d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 25 15:07:01.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5213" for this suite.
Jan 25 15:07:07.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 15:07:07.722: INFO: namespace emptydir-5213 deletion completed in 6.195695941s

• [SLOW TEST:14.547 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
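[Editor's illustration] The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' lines in both EmptyDir specs above are the framework polling the pod's phase every two seconds until it terminates: Succeeded passes, Failed fails, anything else keeps polling (hence the Pending lines with growing Elapsed values). A minimal sketch of that wait loop, assuming a clientset built as in the earlier sketch and v1.15-era Get signatures; ns and podName are placeholders:

package e2esketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSuccessOrFailure polls a pod's phase every two seconds, up to five
// minutes, mirroring the "success or failure" wait seen in the log.
func waitForSuccessOrFailure(clientset kubernetes.Interface, ns, podName string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if err != nil {
			return false, err // give up on API errors
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // the test command exited 0
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, podName)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

Returning a non-nil error from the condition aborts the poll early, which is why a Failed phase surfaces immediately rather than after the full five-minute timeout.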
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 25 15:07:07.722: INFO: Running AfterSuite actions on all nodes
Jan 25 15:07:07.722: INFO: Running AfterSuite actions on node 1
Jan 25 15:07:07.722: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7857.378 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS