I0124 12:56:11.151100 9 e2e.go:243] Starting e2e run "529ef535-b631-48c0-b4e4-6ffb97774c23" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579870570 - Will randomize all specs
Will run 215 of 4412 specs
Jan 24 12:56:11.381: INFO: >>> kubeConfig: /root/.kube/config
Jan 24 12:56:11.384: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 24 12:56:11.419: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 24 12:56:11.557: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 24 12:56:11.557: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 24 12:56:11.557: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 24 12:56:11.572: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 24 12:56:11.572: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 24 12:56:11.572: INFO: e2e test version: v1.15.7
Jan 24 12:56:11.573: INFO: kube-apiserver version: v1.15.1
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:56:11.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Jan 24 12:56:11.759: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0124 12:56:13.221812 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 12:56:13.221: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:56:13.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2866" for this suite.
Jan 24 12:56:19.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:56:19.423: INFO: namespace gc-2866 deletion completed in 6.198436471s
• [SLOW TEST:7.850 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:56:19.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 12:56:19.691: INFO: Waiting up to 5m0s for pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25" in namespace "emptydir-6955" to be "success or failure"
Jan 24 12:56:19.701: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081458ms
Jan 24 12:56:22.794: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 3.102743724s
Jan 24 12:56:24.804: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 5.112778592s
Jan 24 12:56:26.811: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 7.11971603s
Jan 24 12:56:28.818: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 9.127557156s
Jan 24 12:56:30.827: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Pending", Reason="", readiness=false. Elapsed: 11.135651982s
Jan 24 12:56:32.834: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.142998402s
STEP: Saw pod success
Jan 24 12:56:32.834: INFO: Pod "pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25" satisfied condition "success or failure"
Jan 24 12:56:32.838: INFO: Trying to get logs from node iruya-node pod pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25 container test-container:
STEP: delete the pod
Jan 24 12:56:32.909: INFO: Waiting for pod pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25 to disappear
Jan 24 12:56:32.918: INFO: Pod pod-b54c48a1-ecf0-40bb-a21a-74ab8c095f25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:56:32.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6955" for this suite.
Jan 24 12:56:40.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:56:41.047: INFO: namespace emptydir-6955 deletion completed in 8.122335164s
• [SLOW TEST:21.624 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:56:41.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:57:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3448" for this suite.
Jan 24 12:58:03.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:58:04.027: INFO: namespace container-probe-3448 deletion completed in 22.101463493s
• [SLOW TEST:82.980 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:58:04.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 24 12:58:04.201: INFO: Waiting up to 5m0s for pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773" in namespace "emptydir-9986" to be "success or failure"
Jan 24 12:58:04.216: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 15.325878ms
Jan 24 12:58:06.223: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022488383s
Jan 24 12:58:08.259: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058098288s
Jan 24 12:58:10.270: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069502205s
Jan 24 12:58:13.208: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006665344s
Jan 24 12:58:15.215: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Pending", Reason="", readiness=false. Elapsed: 11.014335863s
Jan 24 12:58:17.224: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.022707082s
STEP: Saw pod success
Jan 24 12:58:17.224: INFO: Pod "pod-8099f2d9-c0aa-41d2-8882-f0086fb50773" satisfied condition "success or failure"
Jan 24 12:58:17.226: INFO: Trying to get logs from node iruya-node pod pod-8099f2d9-c0aa-41d2-8882-f0086fb50773 container test-container:
STEP: delete the pod
Jan 24 12:58:17.332: INFO: Waiting for pod pod-8099f2d9-c0aa-41d2-8882-f0086fb50773 to disappear
Jan 24 12:58:17.342: INFO: Pod pod-8099f2d9-c0aa-41d2-8882-f0086fb50773 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:58:17.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9986" for this suite.
Jan 24 12:58:23.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:58:23.588: INFO: namespace emptydir-9986 deletion completed in 6.235404952s
• [SLOW TEST:19.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:58:23.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 24 12:58:36.301: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f2254321-ca1a-4bb2-b790-68a1fa4ec4f1"
Jan 24 12:58:36.302: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f2254321-ca1a-4bb2-b790-68a1fa4ec4f1" in namespace "pods-8033" to be "terminated due to deadline exceeded"
Jan 24 12:58:36.341: INFO: Pod "pod-update-activedeadlineseconds-f2254321-ca1a-4bb2-b790-68a1fa4ec4f1": Phase="Running", Reason="", readiness=true. Elapsed: 38.897879ms
Jan 24 12:58:38.352: INFO: Pod "pod-update-activedeadlineseconds-f2254321-ca1a-4bb2-b790-68a1fa4ec4f1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.050076484s
Jan 24 12:58:38.352: INFO: Pod "pod-update-activedeadlineseconds-f2254321-ca1a-4bb2-b790-68a1fa4ec4f1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:58:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8033" for this suite.
Jan 24 12:58:44.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:58:44.662: INFO: namespace pods-8033 deletion completed in 6.302879806s
• [SLOW TEST:21.074 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:58:44.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 24 12:58:44.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2363 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 24 12:58:59.531: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0124 12:58:57.937219 30 log.go:172] (0xc0007540b0) (0xc000554640) Create stream\nI0124 12:58:57.937378 30 log.go:172] (0xc0007540b0) (0xc000554640) Stream added, broadcasting: 1\nI0124 12:58:57.945672 30 log.go:172] (0xc0007540b0) Reply frame received for 1\nI0124 12:58:57.945736 30 log.go:172] (0xc0007540b0) (0xc0008aa000) Create stream\nI0124 12:58:57.945744 30 log.go:172] (0xc0007540b0) (0xc0008aa000) Stream added, broadcasting: 3\nI0124 12:58:57.947433 30 log.go:172] (0xc0007540b0) Reply frame received for 3\nI0124 12:58:57.947468 30 log.go:172] (0xc0007540b0) (0xc000430640) Create stream\nI0124 12:58:57.947475 30 log.go:172] (0xc0007540b0) (0xc000430640) Stream added, broadcasting: 5\nI0124 12:58:57.948789 30 log.go:172] (0xc0007540b0) Reply frame received for 5\nI0124 12:58:57.948808 30 log.go:172] (0xc0007540b0) (0xc0005546e0) Create stream\nI0124 12:58:57.948813 30 log.go:172] (0xc0007540b0) (0xc0005546e0) Stream added, broadcasting: 7\nI0124 12:58:57.950467 30 log.go:172] (0xc0007540b0) Reply frame received for 7\nI0124 12:58:57.950653 30 log.go:172] (0xc0008aa000) (3) Writing data frame\nI0124 12:58:57.950836 30 log.go:172] (0xc0008aa000) (3) Writing data frame\nI0124 12:58:57.963139 30 log.go:172] (0xc0007540b0) Data frame received for 5\nI0124 12:58:57.963151 30 log.go:172] (0xc000430640) (5) Data frame handling\nI0124 12:58:57.963162 30 log.go:172] (0xc000430640) (5) Data frame sent\nI0124 12:58:57.969277 30 log.go:172] (0xc0007540b0) Data frame received for 5\nI0124 12:58:57.969290 30 log.go:172] (0xc000430640) (5) Data frame handling\nI0124 12:58:57.969300 30 log.go:172] (0xc000430640) (5) Data frame sent\nI0124 12:58:59.491179 30 log.go:172] (0xc0007540b0) Data frame received for 1\nI0124 12:58:59.491546 30 log.go:172] (0xc0007540b0) (0xc0005546e0) Stream removed, broadcasting: 7\nI0124 12:58:59.491643 30 log.go:172] (0xc000554640) (1) Data frame handling\nI0124 12:58:59.491732 30 log.go:172] (0xc000554640) (1) Data frame sent\nI0124 12:58:59.491817 30 log.go:172] (0xc0007540b0) (0xc0008aa000) Stream removed, broadcasting: 3\nI0124 12:58:59.491907 30 log.go:172] (0xc0007540b0) (0xc000554640) Stream removed, broadcasting: 1\nI0124 12:58:59.491963 30 log.go:172] (0xc0007540b0) (0xc000430640) Stream removed, broadcasting: 5\nI0124 12:58:59.492014 30 log.go:172] (0xc0007540b0) Go away received\nI0124 12:58:59.492042 30 log.go:172] (0xc0007540b0) (0xc000554640) Stream removed, broadcasting: 1\nI0124 12:58:59.492065 30 log.go:172] (0xc0007540b0) (0xc0008aa000) Stream removed, broadcasting: 3\nI0124 12:58:59.492084 30 log.go:172] (0xc0007540b0) (0xc000430640) Stream removed, broadcasting: 5\nI0124 12:58:59.492102 30 log.go:172] (0xc0007540b0) (0xc0005546e0) Stream removed, broadcasting: 7\n"
Jan 24 12:58:59.531: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:59:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2363" for this suite.
Jan 24 12:59:07.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:59:07.722: INFO: namespace kubectl-2363 deletion completed in 6.170144437s
• [SLOW TEST:23.061 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 12:59:07.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 24 12:59:07.837: INFO: Waiting up to 5m0s for pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9" in namespace "downward-api-6119" to be "success or failure"
Jan 24 12:59:07.903: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 66.199969ms
Jan 24 12:59:09.910: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073569064s
Jan 24 12:59:11.926: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088889049s
Jan 24 12:59:13.995: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1582018s
Jan 24 12:59:16.047: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210694324s
Jan 24 12:59:18.058: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221555599s
Jan 24 12:59:20.067: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.229841446s
STEP: Saw pod success
Jan 24 12:59:20.067: INFO: Pod "downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9" satisfied condition "success or failure"
Jan 24 12:59:20.071: INFO: Trying to get logs from node iruya-node pod downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9 container dapi-container:
STEP: delete the pod
Jan 24 12:59:20.161: INFO: Waiting for pod downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9 to disappear
Jan 24 12:59:20.170: INFO: Pod downward-api-6afe4408-4c9e-439c-8d9b-31bac07042a9 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 12:59:20.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6119" for this suite.
Jan 24 12:59:26.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 12:59:26.356: INFO: namespace downward-api-6119 deletion completed in 6.17780109s • [SLOW TEST:18.634 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 12:59:26.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 24 12:59:26.501: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681140,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 12:59:26.501: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681140,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 24 12:59:36.524: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681153,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 24 12:59:36.525: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681153,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 24 12:59:46.542: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681169,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 24 12:59:46.543: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681169,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 24 12:59:56.558: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681183,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 12:59:56.558: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-a,UID:ba14fe22-28b7-4ede-a95c-b8eccd5af533,ResourceVersion:21681183,Generation:0,CreationTimestamp:2020-01-24 12:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 24 13:00:06.581: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-b,UID:54267375-8425-4cb5-a874-7938cf9adb50,ResourceVersion:21681197,Generation:0,CreationTimestamp:2020-01-24 13:00:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 13:00:06.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-b,UID:54267375-8425-4cb5-a874-7938cf9adb50,ResourceVersion:21681197,Generation:0,CreationTimestamp:2020-01-24 13:00:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 24 13:00:16.599: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-b,UID:54267375-8425-4cb5-a874-7938cf9adb50,ResourceVersion:21681211,Generation:0,CreationTimestamp:2020-01-24 13:00:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 13:00:16.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4407,SelfLink:/api/v1/namespaces/watch-4407/configmaps/e2e-watch-test-configmap-b,UID:54267375-8425-4cb5-a874-7938cf9adb50,ResourceVersion:21681211,Generation:0,CreationTimestamp:2020-01-24 13:00:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:00:26.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4407" for this suite.
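For reference, the labeled objects this watch test observes can be reconstructed from the dumps above as a minimal manifest; only the `watch-this-configmap` label is significant to the watch selector (the sketch below is an editorial reconstruction, not output from the test):

```yaml
# Minimal form of the ConfigMap whose ADDED/MODIFIED/DELETED events
# are logged above; the label is what the watchers select on.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-4407
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"
```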
Jan 24 13:00:32.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:32.893: INFO: namespace watch-4407 deletion completed in 6.278056132s
• [SLOW TEST:66.536 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:00:32.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-da22ffeb-b6dd-4749-b6a8-498a6afbae2c
STEP: Creating a pod to test consume secrets
Jan 24 13:00:32.995: INFO: Waiting up to 5m0s for pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8" in namespace "secrets-4841" to be "success or failure"
Jan 24 13:00:33.000: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470817ms
Jan 24 13:00:35.007: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011922295s
Jan 24 13:00:37.016: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020484098s
Jan 24 13:00:39.026: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030593454s
Jan 24 13:00:41.034: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038197163s
STEP: Saw pod success
Jan 24 13:00:41.034: INFO: Pod "pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8" satisfied condition "success or failure"
Jan 24 13:00:41.037: INFO: Trying to get logs from node iruya-node pod pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8 container secret-volume-test:
STEP: delete the pod
Jan 24 13:00:41.084: INFO: Waiting for pod pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8 to disappear
Jan 24 13:00:41.090: INFO: Pod pod-secrets-0f271fd9-d452-4f0c-a22b-04e5176ef1f8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:00:41.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4841" for this suite.
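The pod this test creates mounts the secret as a volume with `defaultMode` set. A sketch of such a pod follows; the secret name is taken from the log above, while the pod name, image, and mount path are illustrative assumptions (the real test uses generated names and an e2e test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical; the test uses a UID-suffixed name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-da22ffeb-b6dd-4749-b6a8-498a6afbae2c
      defaultMode: 0400       # the mode under test: files readable by owner only
  containers:
  - name: secret-volume-test  # container name as seen in the log
    image: busybox            # illustrative; not the image the test uses
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```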
Jan 24 13:00:47.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:47.242: INFO: namespace secrets-4841 deletion completed in 6.145974324s
• [SLOW TEST:14.349 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:00:47.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 24 13:00:47.452: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix951783756/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:00:47.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6597" for this suite.
Jan 24 13:00:53.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:53.740: INFO: namespace kubectl-6597 deletion completed in 6.163934027s
• [SLOW TEST:6.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:00:53.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 24 13:00:53.891: INFO: Waiting up to 5m0s for pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77" in namespace "var-expansion-1356" to be "success or failure"
Jan 24 13:00:53.968: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Pending", Reason="", readiness=false. Elapsed: 77.323703ms
Jan 24 13:00:55.978: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08690235s
Jan 24 13:00:57.986: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095509365s
Jan 24 13:00:59.994: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102774733s
Jan 24 13:01:02.000: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109439383s
Jan 24 13:01:04.016: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125367842s
STEP: Saw pod success
Jan 24 13:01:04.016: INFO: Pod "var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77" satisfied condition "success or failure"
Jan 24 13:01:04.023: INFO: Trying to get logs from node iruya-node pod var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77 container dapi-container:
STEP: delete the pod
Jan 24 13:01:04.133: INFO: Waiting for pod var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77 to disappear
Jan 24 13:01:04.138: INFO: Pod var-expansion-8bc297d2-1002-44e7-8ac0-c6d32b0d0a77 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:01:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1356" for this suite.
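The env-composition mechanism this test exercises is the `$(VAR)` expansion that the kubelet performs on `env` values, referencing variables defined earlier in the same list. A minimal sketch (pod name, image, and variable names are illustrative; only the container name `dapi-container` appears in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"  # expands to prefix-foo-value-suffix
```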
Jan 24 13:01:10.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:01:10.360: INFO: namespace var-expansion-1356 deletion completed in 6.216668168s
• [SLOW TEST:16.619 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:01:10.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 13:01:18.538: INFO: Waiting up to 5m0s for pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd" in namespace "pods-6034" to be "success or failure"
Jan 24 13:01:18.557: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.203585ms
Jan 24 13:01:20.573: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034607292s
Jan 24 13:01:22.583: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044495099s
Jan 24 13:01:24.590: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051482552s
Jan 24 13:01:26.620: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08173208s
STEP: Saw pod success
Jan 24 13:01:26.620: INFO: Pod "client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd" satisfied condition "success or failure"
Jan 24 13:01:26.626: INFO: Trying to get logs from node iruya-node pod client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd container env3cont:
STEP: delete the pod
Jan 24 13:01:26.678: INFO: Waiting for pod client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd to disappear
Jan 24 13:01:26.684: INFO: Pod client-envvars-9c0033ef-3e67-490e-838a-44964826e2dd no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:01:26.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6034" for this suite.
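What this test verifies is the documented behavior that a pod started after a Service exists receives `<SVCNAME>_SERVICE_HOST` and `<SVCNAME>_SERVICE_PORT` environment variables for it. A sketch of the client side (pod and service names are illustrative; only the container name `env3cont` appears in the log):

```yaml
# Assuming a Service named "fooservice" already exists in the namespace,
# this pod's environment will contain FOOSERVICE_SERVICE_HOST and
# FOOSERVICE_SERVICE_PORT, injected by the kubelet at container start.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["sh", "-c", "env | grep SERVICE"]
```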
Jan 24 13:02:18.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:02:18.836: INFO: namespace pods-6034 deletion completed in 52.146402648s
• [SLOW TEST:68.475 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:02:18.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-cee0472e-5d10-445f-9751-926467eefe29
STEP: Creating a pod to test consume configMaps
Jan 24 13:02:19.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d" in namespace "configmap-5493" to be "success or failure"
Jan 24 13:02:19.126: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.807752ms
Jan 24 13:02:21.135: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016689156s
Jan 24 13:02:23.144: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026000486s
Jan 24 13:02:25.151: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033269809s
Jan 24 13:02:27.175: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056735986s
Jan 24 13:02:29.183: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065393534s
Jan 24 13:02:31.251: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.133048714s
STEP: Saw pod success
Jan 24 13:02:31.251: INFO: Pod "pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d" satisfied condition "success or failure"
Jan 24 13:02:31.255: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d container configmap-volume-test:
STEP: delete the pod
Jan 24 13:02:31.385: INFO: Waiting for pod pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d to disappear
Jan 24 13:02:31.395: INFO: Pod pod-configmaps-d4e7737d-ea4f-470d-8754-ea855ffa0a0d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:02:31.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5493" for this suite.
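The "multiple volumes in the same pod" case above amounts to mounting the same ConfigMap under two volume entries. A sketch under that assumption (ConfigMap name from the log; pod name, image, and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-cee0472e-5d10-445f-9751-926467eefe29
  - name: configmap-volume-2     # the same ConfigMap declared a second time
    configMap:
      name: configmap-test-volume-cee0472e-5d10-445f-9751-926467eefe29
  containers:
  - name: configmap-volume-test  # container name as seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/* /etc/configmap-volume-2/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
```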
Jan 24 13:02:37.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:02:37.554: INFO: namespace configmap-5493 deletion completed in 6.153303755s
• [SLOW TEST:18.718 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:02:37.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 24 13:02:37.639: INFO: namespace kubectl-9142
Jan 24 13:02:37.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9142'
Jan 24 13:02:37.983: INFO: stderr: ""
Jan 24 13:02:37.983: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 24 13:02:38.996: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:38.997: INFO: Found 0 / 1
Jan 24 13:02:39.994: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:39.994: INFO: Found 0 / 1
Jan 24 13:02:40.995: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:40.995: INFO: Found 0 / 1
Jan 24 13:02:42.014: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:42.014: INFO: Found 0 / 1
Jan 24 13:02:42.992: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:42.992: INFO: Found 0 / 1
Jan 24 13:02:43.996: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:43.996: INFO: Found 0 / 1
Jan 24 13:02:45.002: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:45.002: INFO: Found 0 / 1
Jan 24 13:02:45.992: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:45.992: INFO: Found 1 / 1
Jan 24 13:02:45.992: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 24 13:02:45.998: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:02:45.998: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 24 13:02:45.998: INFO: wait on redis-master startup in kubectl-9142
Jan 24 13:02:45.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sjjk6 redis-master --namespace=kubectl-9142'
Jan 24 13:02:46.130: INFO: stderr: ""
Jan 24 13:02:46.130: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Jan 13:02:45.184 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 13:02:45.184 # Server started, Redis version 3.2.12\n1:M 24 Jan 13:02:45.185 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 13:02:45.185 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 24 13:02:46.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9142'
Jan 24 13:02:46.258: INFO: stderr: ""
Jan 24 13:02:46.258: INFO: stdout: "service/rm2 exposed\n"
Jan 24 13:02:46.337: INFO: Service rm2 in namespace kubectl-9142 found.
STEP: exposing service
Jan 24 13:02:48.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9142'
Jan 24 13:02:48.586: INFO: stderr: ""
Jan 24 13:02:48.586: INFO: stdout: "service/rm3 exposed\n"
Jan 24 13:02:48.601: INFO: Service rm3 in namespace kubectl-9142 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:02:50.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9142" for this suite.
Jan 24 13:03:14.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:03:14.743: INFO: namespace kubectl-9142 deletion completed in 24.117299337s
• [SLOW TEST:37.188 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:03:14.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 24 13:03:14.827: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 24 13:03:19.831: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:03:21.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9057" for this suite.
Jan 24 13:03:27.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:03:27.913: INFO: namespace replication-controller-9057 deletion completed in 6.290066952s
• [SLOW TEST:13.170 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:03:27.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-603c8d28-2ac1-47b1-9441-1629caf573d9
STEP: Creating a pod to test consume secrets
Jan 24 13:03:28.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389" in namespace "projected-8070" to be "success or failure"
Jan 24 13:03:28.394: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 181.288296ms
Jan 24 13:03:30.418: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204902997s
Jan 24 13:03:32.425: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211780626s
Jan 24 13:03:34.435: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221979973s
Jan 24 13:03:36.445: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232472681s
Jan 24 13:03:38.457: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244139641s
Jan 24 13:03:40.472: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259056431s
Jan 24 13:03:42.479: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.266533311s
STEP: Saw pod success
Jan 24 13:03:42.479: INFO: Pod "pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389" satisfied condition "success or failure"
Jan 24 13:03:42.482: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389 container projected-secret-volume-test:
STEP: delete the pod
Jan 24 13:03:42.536: INFO: Waiting for pod pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389 to disappear
Jan 24 13:03:42.542: INFO: Pod pod-projected-secrets-d1e78862-640e-42f2-97e1-e97d96ff2389 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:03:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8070" for this suite.
Jan 24 13:03:48.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:03:48.703: INFO: namespace projected-8070 deletion completed in 6.153884764s
• [SLOW TEST:20.789 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:03:48.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:03:48.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9" in namespace "projected-3194" to be "success or failure"
Jan 24 13:03:48.907: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.909153ms
Jan 24 13:03:50.915: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01684392s
Jan 24 13:03:52.925: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026284746s
Jan 24 13:03:54.936: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037240544s
Jan 24 13:03:56.947: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048095014s
Jan 24 13:03:58.961: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061988904s
STEP: Saw pod success
Jan 24 13:03:58.961: INFO: Pod "downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9" satisfied condition "success or failure"
Jan 24 13:03:58.966: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9 container client-container:
STEP: delete the pod
Jan 24 13:03:59.035: INFO: Waiting for pod downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9 to disappear
Jan 24 13:03:59.040: INFO: Pod downwardapi-volume-b122cf4f-31f8-4bf9-b13e-9356e2e5c8b9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:03:59.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3194" for this suite.
Jan 24 13:04:05.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:04:05.180: INFO: namespace projected-3194 deletion completed in 6.135882395s
• [SLOW TEST:16.477 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:04:05.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:04:05.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd" in namespace "projected-9982" to be "success or failure"
Jan 24 13:04:05.376: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.936405ms
Jan 24 13:04:07.385: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030980641s
Jan 24 13:04:09.396: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04124122s
Jan 24 13:04:11.402: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047334542s
Jan 24 13:04:13.439: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084322311s
Jan 24 13:04:15.456: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.101545865s
Jan 24 13:04:17.471: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.116415169s
Jan 24 13:04:19.516: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.161188692s
STEP: Saw pod success
Jan 24 13:04:19.516: INFO: Pod "downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd" satisfied condition "success or failure"
Jan 24 13:04:19.520: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd container client-container:
STEP: delete the pod
Jan 24 13:04:19.579: INFO: Waiting for pod downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd to disappear
Jan 24 13:04:19.585: INFO: Pod downwardapi-volume-a0bf2a33-c630-4f73-ae42-d3580e9132cd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:04:19.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9982" for this suite.
Jan 24 13:04:25.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:04:25.723: INFO: namespace projected-9982 deletion completed in 6.131107068s
• [SLOW TEST:20.543 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:04:25.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 24 13:04:25.865: INFO: Waiting up to 5m0s for pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0" in namespace "emptydir-8547" to be "success or failure" Jan 24 13:04:25.872: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765999ms Jan 24 13:04:27.883: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018528744s Jan 24 13:04:29.892: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027228257s Jan 24 13:04:31.939: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073717117s Jan 24 13:04:33.960: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094843433s STEP: Saw pod success Jan 24 13:04:33.960: INFO: Pod "pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0" satisfied condition "success or failure" Jan 24 13:04:33.965: INFO: Trying to get logs from node iruya-node pod pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0 container test-container: STEP: delete the pod Jan 24 13:04:34.262: INFO: Waiting for pod pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0 to disappear Jan 24 13:04:34.305: INFO: Pod pod-cfbc1b62-14c6-4555-8e38-022a82e1efe0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:04:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8547" for this suite. 
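The (root,0644,default) case above writes a file with mode 0644 into an emptyDir volume backed by the node's default medium and asserts on the permissions the container observes. The permission check itself can be exercised locally; this sketch uses a temp directory as a stand-in for the volume mount (the path and filename are illustrative, not the ones the test pod uses):

```shell
# Stand-in for the emptyDir mount path inside the test pod.
dir=$(mktemp -d)
f="$dir/test-file"
touch "$f"
chmod 0644 "$f"
# ls -l-style mode string, as the test container would report it:
mode=$(ls -l "$f" | cut -c1-10)
echo "$mode"
rm -rf "$dir"
```

A 0644 file shows up as `-rw-r--r--`, which is what the conformance check compares against.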
Jan 24 13:04:42.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:04:42.564: INFO: namespace emptydir-8547 deletion completed in 8.252350176s • [SLOW TEST:16.840 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:04:42.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 24 13:04:42.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5707' Jan 24 13:04:42.995: INFO: stderr: "" Jan 24 13:04:42.995: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
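The wait that follows re-runs a kubectl go-template query every few seconds until it prints `true`. A minimal pure-shell sketch of that polling pattern, with the cluster query stubbed out so the loop itself is runnable (the stub, the 1s sleep, and the success-on-third-try behavior are illustrative; the framework polls roughly every 5s):

```shell
# Retry loop in the style of the e2e framework's pod-readiness poll.
# The kubectl go-template query is stubbed: it "succeeds" on the third try.
attempts=0
out=""
while [ "$out" != "true" ]; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then out="true"; else out=""; fi
  [ "$out" = "true" ] || sleep 1   # stand-in for the ~5s inter-poll delay
done
echo "pod reported running after $attempts polls"
```

In the log, an empty stdout from the template means "created but not running", and the poll repeats until the container-status template emits `true`.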
Jan 24 13:04:42.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:04:43.138: INFO: stderr: "" Jan 24 13:04:43.138: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " Jan 24 13:04:43.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n66v9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:43.224: INFO: stderr: "" Jan 24 13:04:43.224: INFO: stdout: "" Jan 24 13:04:43.224: INFO: update-demo-nautilus-n66v9 is created but not running Jan 24 13:04:48.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:04:49.306: INFO: stderr: "" Jan 24 13:04:49.306: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " Jan 24 13:04:49.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n66v9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:49.710: INFO: stderr: "" Jan 24 13:04:49.710: INFO: stdout: "" Jan 24 13:04:49.710: INFO: update-demo-nautilus-n66v9 is created but not running Jan 24 13:04:54.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:04:54.872: INFO: stderr: "" Jan 24 13:04:54.872: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " Jan 24 13:04:54.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n66v9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:54.992: INFO: stderr: "" Jan 24 13:04:54.992: INFO: stdout: "true" Jan 24 13:04:54.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n66v9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:55.085: INFO: stderr: "" Jan 24 13:04:55.085: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:04:55.085: INFO: validating pod update-demo-nautilus-n66v9 Jan 24 13:04:55.102: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:04:55.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:04:55.102: INFO: update-demo-nautilus-n66v9 is verified up and running Jan 24 13:04:55.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:55.205: INFO: stderr: "" Jan 24 13:04:55.205: INFO: stdout: "true" Jan 24 13:04:55.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:04:55.281: INFO: stderr: "" Jan 24 13:04:55.281: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:04:55.281: INFO: validating pod update-demo-nautilus-shbx4 Jan 24 13:04:55.324: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:04:55.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:04:55.324: INFO: update-demo-nautilus-shbx4 is verified up and running STEP: scaling down the replication controller Jan 24 13:04:55.337: INFO: scanned /root for discovery docs: Jan 24 13:04:55.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5707' Jan 24 13:04:56.641: INFO: stderr: "" Jan 24 13:04:56.641: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
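The scale request above is verified by re-listing pod names with the template query and counting them until the count matches the new replica total. That counting step in isolation, with the kubectl output stubbed by the literal stdout string from this log:

```shell
# Template-query stdout exactly as logged (note the trailing space):
pods="update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 "
# wc -w counts whitespace-separated names, so the trailing space is harmless;
# tr strips the padding some wc implementations add.
actual=$(echo "$pods" | wc -w | tr -d ' ')
echo "Replicas for name=update-demo: expected=1 actual=$actual"
```

This reproduces the `expected=1 actual=2` STEP lines seen while the scale-down is still in flight.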
Jan 24 13:04:56.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:04:56.777: INFO: stderr: "" Jan 24 13:04:56.777: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 13:05:01.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:01.926: INFO: stderr: "" Jan 24 13:05:01.926: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 13:05:06.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:07.017: INFO: stderr: "" Jan 24 13:05:07.017: INFO: stdout: "update-demo-nautilus-n66v9 update-demo-nautilus-shbx4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 13:05:12.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:12.134: INFO: stderr: "" Jan 24 13:05:12.134: INFO: stdout: "update-demo-nautilus-shbx4 " Jan 24 13:05:12.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:12.235: INFO: stderr: "" Jan 24 13:05:12.235: INFO: stdout: "true" Jan 24 13:05:12.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:12.319: INFO: stderr: "" Jan 24 13:05:12.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:05:12.319: INFO: validating pod update-demo-nautilus-shbx4 Jan 24 13:05:12.328: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:05:12.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:05:12.328: INFO: update-demo-nautilus-shbx4 is verified up and running STEP: scaling up the replication controller Jan 24 13:05:12.330: INFO: scanned /root for discovery docs: Jan 24 13:05:12.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5707' Jan 24 13:05:13.498: INFO: stderr: "" Jan 24 13:05:13.498: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 13:05:13.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:13.621: INFO: stderr: "" Jan 24 13:05:13.622: INFO: stdout: "update-demo-nautilus-hcn4s update-demo-nautilus-shbx4 " Jan 24 13:05:13.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcn4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:14.006: INFO: stderr: "" Jan 24 13:05:14.006: INFO: stdout: "" Jan 24 13:05:14.006: INFO: update-demo-nautilus-hcn4s is created but not running Jan 24 13:05:19.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:19.462: INFO: stderr: "" Jan 24 13:05:19.462: INFO: stdout: "update-demo-nautilus-hcn4s update-demo-nautilus-shbx4 " Jan 24 13:05:19.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcn4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:19.621: INFO: stderr: "" Jan 24 13:05:19.621: INFO: stdout: "" Jan 24 13:05:19.621: INFO: update-demo-nautilus-hcn4s is created but not running Jan 24 13:05:24.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5707' Jan 24 13:05:24.742: INFO: stderr: "" Jan 24 13:05:24.742: INFO: stdout: "update-demo-nautilus-hcn4s update-demo-nautilus-shbx4 " Jan 24 13:05:24.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcn4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:24.817: INFO: stderr: "" Jan 24 13:05:24.817: INFO: stdout: "true" Jan 24 13:05:24.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcn4s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:24.895: INFO: stderr: "" Jan 24 13:05:24.895: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:05:24.895: INFO: validating pod update-demo-nautilus-hcn4s Jan 24 13:05:24.902: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:05:24.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:05:24.902: INFO: update-demo-nautilus-hcn4s is verified up and running Jan 24 13:05:24.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:25.049: INFO: stderr: "" Jan 24 13:05:25.050: INFO: stdout: "true" Jan 24 13:05:25.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shbx4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5707' Jan 24 13:05:25.132: INFO: stderr: "" Jan 24 13:05:25.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:05:25.132: INFO: validating pod update-demo-nautilus-shbx4 Jan 24 13:05:25.141: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:05:25.141: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
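The validation above fetches data served by each pod and compares the `image` field against the expected value. The comparison step, with the HTTP fetch stubbed by the JSON body literally shown in the log (the sed extraction assumes the flat one-key shape shown; a real client would use a JSON parser):

```shell
# JSON payload exactly as the test logged it:
data='{ "image": "nautilus.jpg" }'
# Pull out the image field; assumes the simple one-key shape above.
got=$(echo "$data" | sed -n 's/.*"image": *"\([^"]*\)".*/\1/p')
expected="nautilus.jpg"
[ "$got" = "$expected" ] && echo "verified: $got"
```

A match is what produces the "is verified up and running" lines in the log.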
Jan 24 13:05:25.141: INFO: update-demo-nautilus-shbx4 is verified up and running STEP: using delete to clean up resources Jan 24 13:05:25.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5707' Jan 24 13:05:25.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:05:25.218: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 24 13:05:25.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5707' Jan 24 13:05:25.304: INFO: stderr: "No resources found.\n" Jan 24 13:05:25.304: INFO: stdout: "" Jan 24 13:05:25.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 13:05:25.380: INFO: stderr: "" Jan 24 13:05:25.380: INFO: stdout: "update-demo-nautilus-hcn4s\nupdate-demo-nautilus-shbx4\n" Jan 24 13:05:25.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5707' Jan 24 13:05:26.781: INFO: stderr: "No resources found.\n" Jan 24 13:05:26.781: INFO: stdout: "" Jan 24 13:05:26.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 13:05:27.011: INFO: stderr: "" Jan 24 13:05:27.012: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:05:27.012: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5707" for this suite. Jan 24 13:05:49.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:05:49.391: INFO: namespace kubectl-5707 deletion completed in 22.347018167s • [SLOW TEST:66.827 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:05:49.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 24 13:05:58.217: INFO: Successfully updated pod "annotationupdatee7ac3ce6-7245-418b-9d8d-1ed2cd8e9bcf" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:06:00.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7328" for this suite. Jan 24 13:06:22.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:06:22.480: INFO: namespace projected-7328 deletion completed in 22.187766546s • [SLOW TEST:33.088 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:06:22.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 24 13:06:22.549: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8764' Jan 24 13:06:22.826: INFO: stderr: "" Jan 24 13:06:22.826: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 13:06:22.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8764' Jan 24 13:06:22.983: INFO: stderr: "" Jan 24 13:06:22.983: INFO: stdout: "update-demo-nautilus-75k7r update-demo-nautilus-rhq6k " Jan 24 13:06:22.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:23.091: INFO: stderr: "" Jan 24 13:06:23.091: INFO: stdout: "" Jan 24 13:06:23.091: INFO: update-demo-nautilus-75k7r is created but not running Jan 24 13:06:28.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8764' Jan 24 13:06:29.356: INFO: stderr: "" Jan 24 13:06:29.356: INFO: stdout: "update-demo-nautilus-75k7r update-demo-nautilus-rhq6k " Jan 24 13:06:29.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:30.615: INFO: stderr: "" Jan 24 13:06:30.615: INFO: stdout: "" Jan 24 13:06:30.615: INFO: update-demo-nautilus-75k7r is created but not running Jan 24 13:06:35.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8764' Jan 24 13:06:35.731: INFO: stderr: "" Jan 24 13:06:35.732: INFO: stdout: "update-demo-nautilus-75k7r update-demo-nautilus-rhq6k " Jan 24 13:06:35.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:35.831: INFO: stderr: "" Jan 24 13:06:35.831: INFO: stdout: "true" Jan 24 13:06:35.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:35.920: INFO: stderr: "" Jan 24 13:06:35.920: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:06:35.920: INFO: validating pod update-demo-nautilus-75k7r Jan 24 13:06:35.940: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:06:35.940: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:06:35.940: INFO: update-demo-nautilus-75k7r is verified up and running Jan 24 13:06:35.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhq6k -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:36.019: INFO: stderr: "" Jan 24 13:06:36.019: INFO: stdout: "true" Jan 24 13:06:36.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhq6k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:06:36.092: INFO: stderr: "" Jan 24 13:06:36.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 13:06:36.092: INFO: validating pod update-demo-nautilus-rhq6k Jan 24 13:06:36.111: INFO: got data: { "image": "nautilus.jpg" } Jan 24 13:06:36.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 13:06:36.111: INFO: update-demo-nautilus-rhq6k is verified up and running STEP: rolling-update to new replication controller Jan 24 13:06:36.113: INFO: scanned /root for discovery docs: Jan 24 13:06:36.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8764' Jan 24 13:07:05.210: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 24 13:07:05.210: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
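The rolling-update output above follows a simple invariant: with 2 desired replicas, the new and old controllers are stepped so that at least 2 pods stay available and no more than 3 exist at once. The step sequence can be sketched as pure arithmetic (controller names are taken from the log; the loop body is an illustrative reconstruction, not the framework's code):

```shell
desired=2; max_total=3   # "keep 2 pods available, don't exceed 3 pods"
new=0; old=$desired
while [ "$new" -lt "$desired" ] || [ "$old" -gt 0 ]; do
  if [ $((new + old)) -lt "$max_total" ] && [ "$new" -lt "$desired" ]; then
    new=$((new + 1)); echo "Scaling update-demo-kitten up to $new"
  else
    old=$((old - 1)); echo "Scaling update-demo-nautilus down to $old"
  fi
done
```

Running this prints the same four "Scaling ... up/down" lines as the logged rolling-update stdout. Note the log's stderr also warns that `rolling-update` is deprecated in favor of `kubectl rollout`.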
Jan 24 13:07:05.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8764' Jan 24 13:07:05.325: INFO: stderr: "" Jan 24 13:07:05.325: INFO: stdout: "update-demo-kitten-c74kr update-demo-kitten-n5dmm update-demo-nautilus-rhq6k " STEP: Replicas for name=update-demo: expected=2 actual=3 Jan 24 13:07:10.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8764' Jan 24 13:07:10.421: INFO: stderr: "" Jan 24 13:07:10.422: INFO: stdout: "update-demo-kitten-c74kr update-demo-kitten-n5dmm " Jan 24 13:07:10.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c74kr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:07:10.494: INFO: stderr: "" Jan 24 13:07:10.494: INFO: stdout: "true" Jan 24 13:07:10.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c74kr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:07:10.581: INFO: stderr: "" Jan 24 13:07:10.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 24 13:07:10.581: INFO: validating pod update-demo-kitten-c74kr Jan 24 13:07:10.616: INFO: got data: { "image": "kitten.jpg" } Jan 24 13:07:10.616: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Jan 24 13:07:10.616: INFO: update-demo-kitten-c74kr is verified up and running Jan 24 13:07:10.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n5dmm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:07:10.686: INFO: stderr: "" Jan 24 13:07:10.686: INFO: stdout: "true" Jan 24 13:07:10.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n5dmm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8764' Jan 24 13:07:10.765: INFO: stderr: "" Jan 24 13:07:10.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 24 13:07:10.765: INFO: validating pod update-demo-kitten-n5dmm Jan 24 13:07:10.800: INFO: got data: { "image": "kitten.jpg" } Jan 24 13:07:10.800: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 24 13:07:10.800: INFO: update-demo-kitten-n5dmm is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:07:10.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8764" for this suite. 
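The rolling update exercised above drives a ReplicationController through `kubectl rolling-update`, which the log itself flags as deprecated in favor of `kubectl rollout`. A minimal sketch of the equivalent modern flow, assuming a Deployment named `update-demo` with the same e2e test image (the object names and namespace here are illustrative, not taken from the test's manifests):

```yaml
# Illustrative only: the rollout-based equivalent of the deprecated
# `kubectl rolling-update` flow seen in the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo
  namespace: kubectl-8764
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Updating `.spec.template.spec.containers[0].image` to the kitten image and watching `kubectl rollout status deployment/update-demo` reproduces the nautilus-to-kitten transition that the test drives by hand, with the Deployment controller managing the scale-up/scale-down steps the log prints explicitly.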
Jan 24 13:07:34.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:07:34.989: INFO: namespace kubectl-8764 deletion completed in 24.183284967s • [SLOW TEST:72.507 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:07:34.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-b0798818-eef7-4391-ada6-1b43821672dc in namespace container-probe-7748 Jan 24 13:07:45.146: INFO: Started pod busybox-b0798818-eef7-4391-ada6-1b43821672dc in namespace container-probe-7748 STEP: checking the pod's current state and verifying that restartCount is 
present Jan 24 13:07:45.148: INFO: Initial restart count of pod busybox-b0798818-eef7-4391-ada6-1b43821672dc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:11:45.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7748" for this suite. Jan 24 13:11:51.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:11:51.580: INFO: namespace container-probe-7748 deletion completed in 6.178983169s • [SLOW TEST:256.591 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:11:51.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-d7dcab56-0a12-4957-8426-9f61ef5531a2 [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:11:51.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6458" for this suite. Jan 24 13:11:57.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:11:57.880: INFO: namespace secrets-6458 deletion completed in 6.182501652s • [SLOW TEST:6.299 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:11:57.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-f7xh STEP: Creating a pod to test atomic-volume-subpath Jan 24 13:11:58.006: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-f7xh" in namespace "subpath-4211" to be "success or failure" Jan 24 13:11:58.024: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.179468ms Jan 24 13:12:00.035: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028893007s Jan 24 13:12:02.041: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035313772s Jan 24 13:12:04.050: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044580963s Jan 24 13:12:06.059: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 8.052798316s Jan 24 13:12:08.068: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 10.062389316s Jan 24 13:12:10.076: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 12.070418931s Jan 24 13:12:12.106: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 14.10006075s Jan 24 13:12:14.113: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 16.107612571s Jan 24 13:12:16.122: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 18.115839685s Jan 24 13:12:18.135: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 20.129360667s Jan 24 13:12:20.151: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 22.145174414s Jan 24 13:12:22.166: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 24.160087026s Jan 24 13:12:24.173: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Running", Reason="", readiness=true. Elapsed: 26.167726339s Jan 24 13:12:26.186: INFO: Pod "pod-subpath-test-secret-f7xh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.179846127s STEP: Saw pod success Jan 24 13:12:26.186: INFO: Pod "pod-subpath-test-secret-f7xh" satisfied condition "success or failure" Jan 24 13:12:26.195: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-f7xh container test-container-subpath-secret-f7xh: STEP: delete the pod Jan 24 13:12:26.261: INFO: Waiting for pod pod-subpath-test-secret-f7xh to disappear Jan 24 13:12:26.357: INFO: Pod pod-subpath-test-secret-f7xh no longer exists STEP: Deleting pod pod-subpath-test-secret-f7xh Jan 24 13:12:26.357: INFO: Deleting pod "pod-subpath-test-secret-f7xh" in namespace "subpath-4211" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:12:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4211" for this suite. Jan 24 13:12:32.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:12:32.493: INFO: namespace subpath-4211 deletion completed in 6.121796502s • [SLOW TEST:34.612 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Jan 24 13:12:32.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-b8tm STEP: Creating a pod to test atomic-volume-subpath Jan 24 13:12:32.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b8tm" in namespace "subpath-9715" to be "success or failure" Jan 24 13:12:32.718: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384253ms Jan 24 13:12:34.728: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016123025s Jan 24 13:12:36.738: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026318118s Jan 24 13:12:38.757: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044898942s Jan 24 13:12:40.771: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059266334s Jan 24 13:12:42.784: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 10.072069578s Jan 24 13:12:44.792: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 12.080215439s Jan 24 13:12:46.799: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 14.087815523s Jan 24 13:12:48.808: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.096131569s Jan 24 13:12:50.815: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 18.103533547s Jan 24 13:12:52.829: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 20.117601357s Jan 24 13:12:54.837: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 22.125379388s Jan 24 13:12:56.848: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 24.136665157s Jan 24 13:12:58.866: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 26.154568206s Jan 24 13:13:00.875: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Running", Reason="", readiness=true. Elapsed: 28.163208345s Jan 24 13:13:02.881: INFO: Pod "pod-subpath-test-configmap-b8tm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.169534727s STEP: Saw pod success Jan 24 13:13:02.881: INFO: Pod "pod-subpath-test-configmap-b8tm" satisfied condition "success or failure" Jan 24 13:13:02.884: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-b8tm container test-container-subpath-configmap-b8tm: STEP: delete the pod Jan 24 13:13:02.940: INFO: Waiting for pod pod-subpath-test-configmap-b8tm to disappear Jan 24 13:13:02.956: INFO: Pod pod-subpath-test-configmap-b8tm no longer exists STEP: Deleting pod pod-subpath-test-configmap-b8tm Jan 24 13:13:02.956: INFO: Deleting pod "pod-subpath-test-configmap-b8tm" in namespace "subpath-9715" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:13:02.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9715" for this suite. 
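The subpath test above mounts an atomic-writer (ConfigMap-backed) volume over an existing file via `subPath` and waits for the pod to reach `Succeeded`. A minimal pod sketch of the pattern being verified, assuming a ConfigMap named `subpath-data` with a key `data-0` (these names, the image, and the mount path are assumptions for illustration, not the test's actual values):

```yaml
# Illustrative sketch of mounting a single ConfigMap key over an
# existing file path with subPath, as the conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/resolv.conf"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/resolv.conf   # existing file, overlaid via subPath
      subPath: data-0               # only this key replaces the file
  volumes:
  - name: cm-volume
    configMap:
      name: subpath-data
```

Because `subPath` mounts a single key rather than the whole volume, only the named file is overlaid; the rest of the directory keeps the container image's contents, which is exactly the "mountPath of existing file" behavior the test asserts.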
Jan 24 13:13:09.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:13:09.216: INFO: namespace subpath-9715 deletion completed in 6.191881179s • [SLOW TEST:36.722 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:13:09.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 24 13:13:29.400: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 
13:13:29.400: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:29.543433 9 log.go:172] (0xc001f8c210) (0xc0016330e0) Create stream I0124 13:13:29.543553 9 log.go:172] (0xc001f8c210) (0xc0016330e0) Stream added, broadcasting: 1 I0124 13:13:29.555640 9 log.go:172] (0xc001f8c210) Reply frame received for 1 I0124 13:13:29.555704 9 log.go:172] (0xc001f8c210) (0xc001e57040) Create stream I0124 13:13:29.555713 9 log.go:172] (0xc001f8c210) (0xc001e57040) Stream added, broadcasting: 3 I0124 13:13:29.557517 9 log.go:172] (0xc001f8c210) Reply frame received for 3 I0124 13:13:29.557564 9 log.go:172] (0xc001f8c210) (0xc0015b77c0) Create stream I0124 13:13:29.557583 9 log.go:172] (0xc001f8c210) (0xc0015b77c0) Stream added, broadcasting: 5 I0124 13:13:29.560546 9 log.go:172] (0xc001f8c210) Reply frame received for 5 I0124 13:13:29.684971 9 log.go:172] (0xc001f8c210) Data frame received for 3 I0124 13:13:29.685061 9 log.go:172] (0xc001e57040) (3) Data frame handling I0124 13:13:29.685091 9 log.go:172] (0xc001e57040) (3) Data frame sent I0124 13:13:29.838009 9 log.go:172] (0xc001f8c210) (0xc001e57040) Stream removed, broadcasting: 3 I0124 13:13:29.838225 9 log.go:172] (0xc001f8c210) Data frame received for 1 I0124 13:13:29.838245 9 log.go:172] (0xc0016330e0) (1) Data frame handling I0124 13:13:29.838264 9 log.go:172] (0xc0016330e0) (1) Data frame sent I0124 13:13:29.838278 9 log.go:172] (0xc001f8c210) (0xc0016330e0) Stream removed, broadcasting: 1 I0124 13:13:29.838990 9 log.go:172] (0xc001f8c210) (0xc0015b77c0) Stream removed, broadcasting: 5 I0124 13:13:29.839094 9 log.go:172] (0xc001f8c210) Go away received I0124 13:13:29.839245 9 log.go:172] (0xc001f8c210) (0xc0016330e0) Stream removed, broadcasting: 1 I0124 13:13:29.839356 9 log.go:172] (0xc001f8c210) (0xc001e57040) Stream removed, broadcasting: 3 I0124 13:13:29.839397 9 log.go:172] (0xc001f8c210) (0xc0015b77c0) Stream removed, broadcasting: 5 Jan 24 13:13:29.839: INFO: Exec stderr: "" Jan 24 13:13:29.839: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:29.839: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:29.921737 9 log.go:172] (0xc0019d6c60) (0xc0015b7e00) Create stream I0124 13:13:29.921887 9 log.go:172] (0xc0019d6c60) (0xc0015b7e00) Stream added, broadcasting: 1 I0124 13:13:29.931687 9 log.go:172] (0xc0019d6c60) Reply frame received for 1 I0124 13:13:29.931786 9 log.go:172] (0xc0019d6c60) (0xc001e570e0) Create stream I0124 13:13:29.931793 9 log.go:172] (0xc0019d6c60) (0xc001e570e0) Stream added, broadcasting: 3 I0124 13:13:29.935363 9 log.go:172] (0xc0019d6c60) Reply frame received for 3 I0124 13:13:29.935414 9 log.go:172] (0xc0019d6c60) (0xc001ad1040) Create stream I0124 13:13:29.935425 9 log.go:172] (0xc0019d6c60) (0xc001ad1040) Stream added, broadcasting: 5 I0124 13:13:29.936999 9 log.go:172] (0xc0019d6c60) Reply frame received for 5 I0124 13:13:30.071320 9 log.go:172] (0xc0019d6c60) Data frame received for 3 I0124 13:13:30.071359 9 log.go:172] (0xc001e570e0) (3) Data frame handling I0124 13:13:30.071370 9 log.go:172] (0xc001e570e0) (3) Data frame sent I0124 13:13:30.217981 9 log.go:172] (0xc0019d6c60) (0xc001e570e0) Stream removed, broadcasting: 3 I0124 13:13:30.218066 9 log.go:172] (0xc0019d6c60) Data frame received for 1 I0124 13:13:30.218102 9 log.go:172] (0xc0015b7e00) (1) Data frame handling I0124 13:13:30.218131 9 log.go:172] (0xc0019d6c60) (0xc001ad1040) Stream removed, broadcasting: 5 I0124 13:13:30.218168 9 log.go:172] (0xc0015b7e00) (1) Data frame sent I0124 13:13:30.218179 9 log.go:172] (0xc0019d6c60) (0xc0015b7e00) Stream removed, broadcasting: 1 I0124 13:13:30.218203 9 log.go:172] (0xc0019d6c60) Go away received I0124 13:13:30.218482 9 log.go:172] (0xc0019d6c60) (0xc0015b7e00) Stream removed, broadcasting: 1 I0124 13:13:30.218606 9 log.go:172] (0xc0019d6c60) (0xc001e570e0) 
Stream removed, broadcasting: 3 I0124 13:13:30.218714 9 log.go:172] (0xc0019d6c60) (0xc001ad1040) Stream removed, broadcasting: 5 Jan 24 13:13:30.218: INFO: Exec stderr: "" Jan 24 13:13:30.218: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:30.218: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:30.296663 9 log.go:172] (0xc001868840) (0xc00204f9a0) Create stream I0124 13:13:30.296774 9 log.go:172] (0xc001868840) (0xc00204f9a0) Stream added, broadcasting: 1 I0124 13:13:30.304643 9 log.go:172] (0xc001868840) Reply frame received for 1 I0124 13:13:30.304693 9 log.go:172] (0xc001868840) (0xc00204fa40) Create stream I0124 13:13:30.304705 9 log.go:172] (0xc001868840) (0xc00204fa40) Stream added, broadcasting: 3 I0124 13:13:30.306314 9 log.go:172] (0xc001868840) Reply frame received for 3 I0124 13:13:30.306350 9 log.go:172] (0xc001868840) (0xc001e57180) Create stream I0124 13:13:30.306362 9 log.go:172] (0xc001868840) (0xc001e57180) Stream added, broadcasting: 5 I0124 13:13:30.309145 9 log.go:172] (0xc001868840) Reply frame received for 5 I0124 13:13:30.452756 9 log.go:172] (0xc001868840) Data frame received for 3 I0124 13:13:30.452899 9 log.go:172] (0xc00204fa40) (3) Data frame handling I0124 13:13:30.452971 9 log.go:172] (0xc00204fa40) (3) Data frame sent I0124 13:13:30.924886 9 log.go:172] (0xc001868840) Data frame received for 1 I0124 13:13:30.924996 9 log.go:172] (0xc001868840) (0xc00204fa40) Stream removed, broadcasting: 3 I0124 13:13:30.925037 9 log.go:172] (0xc00204f9a0) (1) Data frame handling I0124 13:13:30.925054 9 log.go:172] (0xc00204f9a0) (1) Data frame sent I0124 13:13:30.925061 9 log.go:172] (0xc001868840) (0xc00204f9a0) Stream removed, broadcasting: 1 I0124 13:13:30.925480 9 log.go:172] (0xc001868840) (0xc001e57180) Stream removed, broadcasting: 5 I0124 13:13:30.925501 9 log.go:172] 
(0xc001868840) (0xc00204f9a0) Stream removed, broadcasting: 1 I0124 13:13:30.925508 9 log.go:172] (0xc001868840) (0xc00204fa40) Stream removed, broadcasting: 3 I0124 13:13:30.925518 9 log.go:172] (0xc001868840) (0xc001e57180) Stream removed, broadcasting: 5 I0124 13:13:30.925760 9 log.go:172] (0xc001868840) Go away received Jan 24 13:13:30.925: INFO: Exec stderr: "" Jan 24 13:13:30.925: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:30.925: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:30.983032 9 log.go:172] (0xc0018f8fd0) (0xc001e57400) Create stream I0124 13:13:30.983134 9 log.go:172] (0xc0018f8fd0) (0xc001e57400) Stream added, broadcasting: 1 I0124 13:13:30.993750 9 log.go:172] (0xc0018f8fd0) Reply frame received for 1 I0124 13:13:30.993825 9 log.go:172] (0xc0018f8fd0) (0xc00204fc20) Create stream I0124 13:13:30.993837 9 log.go:172] (0xc0018f8fd0) (0xc00204fc20) Stream added, broadcasting: 3 I0124 13:13:30.995616 9 log.go:172] (0xc0018f8fd0) Reply frame received for 3 I0124 13:13:30.995644 9 log.go:172] (0xc0018f8fd0) (0xc001e574a0) Create stream I0124 13:13:30.995652 9 log.go:172] (0xc0018f8fd0) (0xc001e574a0) Stream added, broadcasting: 5 I0124 13:13:30.997306 9 log.go:172] (0xc0018f8fd0) Reply frame received for 5 I0124 13:13:31.110765 9 log.go:172] (0xc0018f8fd0) Data frame received for 3 I0124 13:13:31.110993 9 log.go:172] (0xc00204fc20) (3) Data frame handling I0124 13:13:31.111016 9 log.go:172] (0xc00204fc20) (3) Data frame sent I0124 13:13:31.200520 9 log.go:172] (0xc0018f8fd0) Data frame received for 1 I0124 13:13:31.200573 9 log.go:172] (0xc0018f8fd0) (0xc00204fc20) Stream removed, broadcasting: 3 I0124 13:13:31.200589 9 log.go:172] (0xc001e57400) (1) Data frame handling I0124 13:13:31.200596 9 log.go:172] (0xc001e57400) (1) Data frame sent I0124 13:13:31.200618 9 log.go:172] 
(0xc0018f8fd0) (0xc001e574a0) Stream removed, broadcasting: 5 I0124 13:13:31.200647 9 log.go:172] (0xc0018f8fd0) (0xc001e57400) Stream removed, broadcasting: 1 I0124 13:13:31.200664 9 log.go:172] (0xc0018f8fd0) Go away received I0124 13:13:31.200815 9 log.go:172] (0xc0018f8fd0) (0xc001e57400) Stream removed, broadcasting: 1 I0124 13:13:31.200834 9 log.go:172] (0xc0018f8fd0) (0xc00204fc20) Stream removed, broadcasting: 3 I0124 13:13:31.200843 9 log.go:172] (0xc0018f8fd0) (0xc001e574a0) Stream removed, broadcasting: 5 Jan 24 13:13:31.200: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 24 13:13:31.200: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:31.200: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:31.246384 9 log.go:172] (0xc0019d76b0) (0xc0020ca140) Create stream I0124 13:13:31.246497 9 log.go:172] (0xc0019d76b0) (0xc0020ca140) Stream added, broadcasting: 1 I0124 13:13:31.250965 9 log.go:172] (0xc0019d76b0) Reply frame received for 1 I0124 13:13:31.250980 9 log.go:172] (0xc0019d76b0) (0xc001e57540) Create stream I0124 13:13:31.250986 9 log.go:172] (0xc0019d76b0) (0xc001e57540) Stream added, broadcasting: 3 I0124 13:13:31.252581 9 log.go:172] (0xc0019d76b0) Reply frame received for 3 I0124 13:13:31.252607 9 log.go:172] (0xc0019d76b0) (0xc001e575e0) Create stream I0124 13:13:31.252620 9 log.go:172] (0xc0019d76b0) (0xc001e575e0) Stream added, broadcasting: 5 I0124 13:13:31.257118 9 log.go:172] (0xc0019d76b0) Reply frame received for 5 I0124 13:13:31.368318 9 log.go:172] (0xc0019d76b0) Data frame received for 3 I0124 13:13:31.368380 9 log.go:172] (0xc001e57540) (3) Data frame handling I0124 13:13:31.368391 9 log.go:172] (0xc001e57540) (3) Data frame sent I0124 13:13:31.462789 9 log.go:172] (0xc0019d76b0) 
(0xc001e57540) Stream removed, broadcasting: 3 I0124 13:13:31.463089 9 log.go:172] (0xc0019d76b0) Data frame received for 1 I0124 13:13:31.463117 9 log.go:172] (0xc0020ca140) (1) Data frame handling I0124 13:13:31.463169 9 log.go:172] (0xc0020ca140) (1) Data frame sent I0124 13:13:31.463262 9 log.go:172] (0xc0019d76b0) (0xc001e575e0) Stream removed, broadcasting: 5 I0124 13:13:31.463336 9 log.go:172] (0xc0019d76b0) (0xc0020ca140) Stream removed, broadcasting: 1 I0124 13:13:31.463367 9 log.go:172] (0xc0019d76b0) Go away received I0124 13:13:31.463635 9 log.go:172] (0xc0019d76b0) (0xc0020ca140) Stream removed, broadcasting: 1 I0124 13:13:31.463656 9 log.go:172] (0xc0019d76b0) (0xc001e57540) Stream removed, broadcasting: 3 I0124 13:13:31.463670 9 log.go:172] (0xc0019d76b0) (0xc001e575e0) Stream removed, broadcasting: 5 Jan 24 13:13:31.463: INFO: Exec stderr: "" Jan 24 13:13:31.463: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:31.463: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:31.506724 9 log.go:172] (0xc0019d7e40) (0xc0020ca280) Create stream I0124 13:13:31.506750 9 log.go:172] (0xc0019d7e40) (0xc0020ca280) Stream added, broadcasting: 1 I0124 13:13:31.511697 9 log.go:172] (0xc0019d7e40) Reply frame received for 1 I0124 13:13:31.511717 9 log.go:172] (0xc0019d7e40) (0xc00204fea0) Create stream I0124 13:13:31.511724 9 log.go:172] (0xc0019d7e40) (0xc00204fea0) Stream added, broadcasting: 3 I0124 13:13:31.512608 9 log.go:172] (0xc0019d7e40) Reply frame received for 3 I0124 13:13:31.512625 9 log.go:172] (0xc0019d7e40) (0xc001e57720) Create stream I0124 13:13:31.512632 9 log.go:172] (0xc0019d7e40) (0xc001e57720) Stream added, broadcasting: 5 I0124 13:13:31.516057 9 log.go:172] (0xc0019d7e40) Reply frame received for 5 I0124 13:13:31.607820 9 log.go:172] (0xc0019d7e40) Data frame received for 3 
I0124 13:13:31.607857 9 log.go:172] (0xc00204fea0) (3) Data frame handling I0124 13:13:31.607881 9 log.go:172] (0xc00204fea0) (3) Data frame sent I0124 13:13:31.731774 9 log.go:172] (0xc0019d7e40) (0xc00204fea0) Stream removed, broadcasting: 3 I0124 13:13:31.731931 9 log.go:172] (0xc0019d7e40) Data frame received for 1 I0124 13:13:31.731944 9 log.go:172] (0xc0020ca280) (1) Data frame handling I0124 13:13:31.731958 9 log.go:172] (0xc0020ca280) (1) Data frame sent I0124 13:13:31.731963 9 log.go:172] (0xc0019d7e40) (0xc0020ca280) Stream removed, broadcasting: 1 I0124 13:13:31.732108 9 log.go:172] (0xc0019d7e40) (0xc001e57720) Stream removed, broadcasting: 5 I0124 13:13:31.732150 9 log.go:172] (0xc0019d7e40) (0xc0020ca280) Stream removed, broadcasting: 1 I0124 13:13:31.732163 9 log.go:172] (0xc0019d7e40) (0xc00204fea0) Stream removed, broadcasting: 3 I0124 13:13:31.732173 9 log.go:172] (0xc0019d7e40) (0xc001e57720) Stream removed, broadcasting: 5 I0124 13:13:31.732455 9 log.go:172] (0xc0019d7e40) Go away received Jan 24 13:13:31.732: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 24 13:13:31.732: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:31.733: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:31.803737 9 log.go:172] (0xc00192d290) (0xc001ad1360) Create stream I0124 13:13:31.803781 9 log.go:172] (0xc00192d290) (0xc001ad1360) Stream added, broadcasting: 1 I0124 13:13:31.810303 9 log.go:172] (0xc00192d290) Reply frame received for 1 I0124 13:13:31.810341 9 log.go:172] (0xc00192d290) (0xc001ad1400) Create stream I0124 13:13:31.810353 9 log.go:172] (0xc00192d290) (0xc001ad1400) Stream added, broadcasting: 3 I0124 13:13:31.812310 9 log.go:172] (0xc00192d290) Reply frame received for 3 I0124 13:13:31.812353 9 
log.go:172] (0xc00192d290) (0xc00204ff40) Create stream I0124 13:13:31.812369 9 log.go:172] (0xc00192d290) (0xc00204ff40) Stream added, broadcasting: 5 I0124 13:13:31.815190 9 log.go:172] (0xc00192d290) Reply frame received for 5 I0124 13:13:31.909569 9 log.go:172] (0xc00192d290) Data frame received for 3 I0124 13:13:31.909992 9 log.go:172] (0xc001ad1400) (3) Data frame handling I0124 13:13:31.910045 9 log.go:172] (0xc001ad1400) (3) Data frame sent I0124 13:13:32.082881 9 log.go:172] (0xc00192d290) (0xc001ad1400) Stream removed, broadcasting: 3 I0124 13:13:32.083014 9 log.go:172] (0xc00192d290) Data frame received for 1 I0124 13:13:32.083025 9 log.go:172] (0xc001ad1360) (1) Data frame handling I0124 13:13:32.083044 9 log.go:172] (0xc001ad1360) (1) Data frame sent I0124 13:13:32.083061 9 log.go:172] (0xc00192d290) (0xc00204ff40) Stream removed, broadcasting: 5 I0124 13:13:32.083092 9 log.go:172] (0xc00192d290) (0xc001ad1360) Stream removed, broadcasting: 1 I0124 13:13:32.083109 9 log.go:172] (0xc00192d290) Go away received I0124 13:13:32.083303 9 log.go:172] (0xc00192d290) (0xc001ad1360) Stream removed, broadcasting: 1 I0124 13:13:32.083336 9 log.go:172] (0xc00192d290) (0xc001ad1400) Stream removed, broadcasting: 3 I0124 13:13:32.083350 9 log.go:172] (0xc00192d290) (0xc00204ff40) Stream removed, broadcasting: 5 Jan 24 13:13:32.083: INFO: Exec stderr: "" Jan 24 13:13:32.083: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:32.083: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:32.146166 9 log.go:172] (0xc002d5f130) (0xc001e57d60) Create stream I0124 13:13:32.146235 9 log.go:172] (0xc002d5f130) (0xc001e57d60) Stream added, broadcasting: 1 I0124 13:13:32.157345 9 log.go:172] (0xc002d5f130) Reply frame received for 1 I0124 13:13:32.157387 9 log.go:172] (0xc002d5f130) (0xc0009f6000) 
Create stream I0124 13:13:32.157393 9 log.go:172] (0xc002d5f130) (0xc0009f6000) Stream added, broadcasting: 3 I0124 13:13:32.162035 9 log.go:172] (0xc002d5f130) Reply frame received for 3 I0124 13:13:32.162113 9 log.go:172] (0xc002d5f130) (0xc001ad14a0) Create stream I0124 13:13:32.162122 9 log.go:172] (0xc002d5f130) (0xc001ad14a0) Stream added, broadcasting: 5 I0124 13:13:32.164018 9 log.go:172] (0xc002d5f130) Reply frame received for 5 I0124 13:13:32.273443 9 log.go:172] (0xc002d5f130) Data frame received for 3 I0124 13:13:32.273691 9 log.go:172] (0xc0009f6000) (3) Data frame handling I0124 13:13:32.273711 9 log.go:172] (0xc0009f6000) (3) Data frame sent I0124 13:13:32.417761 9 log.go:172] (0xc002d5f130) (0xc0009f6000) Stream removed, broadcasting: 3 I0124 13:13:32.417841 9 log.go:172] (0xc002d5f130) Data frame received for 1 I0124 13:13:32.417852 9 log.go:172] (0xc001e57d60) (1) Data frame handling I0124 13:13:32.417864 9 log.go:172] (0xc001e57d60) (1) Data frame sent I0124 13:13:32.417872 9 log.go:172] (0xc002d5f130) (0xc001e57d60) Stream removed, broadcasting: 1 I0124 13:13:32.417883 9 log.go:172] (0xc002d5f130) (0xc001ad14a0) Stream removed, broadcasting: 5 I0124 13:13:32.417906 9 log.go:172] (0xc002d5f130) Go away received I0124 13:13:32.418001 9 log.go:172] (0xc002d5f130) (0xc001e57d60) Stream removed, broadcasting: 1 I0124 13:13:32.418009 9 log.go:172] (0xc002d5f130) (0xc0009f6000) Stream removed, broadcasting: 3 I0124 13:13:32.418014 9 log.go:172] (0xc002d5f130) (0xc001ad14a0) Stream removed, broadcasting: 5 Jan 24 13:13:32.418: INFO: Exec stderr: "" Jan 24 13:13:32.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:32.418: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:32.477376 9 log.go:172] (0xc00179f1e0) (0xc0009f63c0) Create stream I0124 13:13:32.477429 9 log.go:172] 
(0xc00179f1e0) (0xc0009f63c0) Stream added, broadcasting: 1 I0124 13:13:32.485759 9 log.go:172] (0xc00179f1e0) Reply frame received for 1 I0124 13:13:32.485790 9 log.go:172] (0xc00179f1e0) (0xc0020ca320) Create stream I0124 13:13:32.485799 9 log.go:172] (0xc00179f1e0) (0xc0020ca320) Stream added, broadcasting: 3 I0124 13:13:32.487525 9 log.go:172] (0xc00179f1e0) Reply frame received for 3 I0124 13:13:32.487546 9 log.go:172] (0xc00179f1e0) (0xc001633180) Create stream I0124 13:13:32.487553 9 log.go:172] (0xc00179f1e0) (0xc001633180) Stream added, broadcasting: 5 I0124 13:13:32.499353 9 log.go:172] (0xc00179f1e0) Reply frame received for 5 I0124 13:13:32.654707 9 log.go:172] (0xc00179f1e0) Data frame received for 3 I0124 13:13:32.654825 9 log.go:172] (0xc0020ca320) (3) Data frame handling I0124 13:13:32.654864 9 log.go:172] (0xc0020ca320) (3) Data frame sent I0124 13:13:32.755372 9 log.go:172] (0xc00179f1e0) Data frame received for 1 I0124 13:13:32.755641 9 log.go:172] (0xc00179f1e0) (0xc001633180) Stream removed, broadcasting: 5 I0124 13:13:32.755692 9 log.go:172] (0xc0009f63c0) (1) Data frame handling I0124 13:13:32.755729 9 log.go:172] (0xc0009f63c0) (1) Data frame sent I0124 13:13:32.755809 9 log.go:172] (0xc00179f1e0) (0xc0020ca320) Stream removed, broadcasting: 3 I0124 13:13:32.755883 9 log.go:172] (0xc00179f1e0) (0xc0009f63c0) Stream removed, broadcasting: 1 I0124 13:13:32.755908 9 log.go:172] (0xc00179f1e0) Go away received I0124 13:13:32.756011 9 log.go:172] (0xc00179f1e0) (0xc0009f63c0) Stream removed, broadcasting: 1 I0124 13:13:32.756028 9 log.go:172] (0xc00179f1e0) (0xc0020ca320) Stream removed, broadcasting: 3 I0124 13:13:32.756035 9 log.go:172] (0xc00179f1e0) (0xc001633180) Stream removed, broadcasting: 5 Jan 24 13:13:32.756: INFO: Exec stderr: "" Jan 24 13:13:32.756: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9107 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jan 24 13:13:32.756: INFO: >>> kubeConfig: /root/.kube/config I0124 13:13:32.802865 9 log.go:172] (0xc001756630) (0xc0020ca460) Create stream I0124 13:13:32.803103 9 log.go:172] (0xc001756630) (0xc0020ca460) Stream added, broadcasting: 1 I0124 13:13:32.808302 9 log.go:172] (0xc001756630) Reply frame received for 1 I0124 13:13:32.808324 9 log.go:172] (0xc001756630) (0xc0016332c0) Create stream I0124 13:13:32.808331 9 log.go:172] (0xc001756630) (0xc0016332c0) Stream added, broadcasting: 3 I0124 13:13:32.810191 9 log.go:172] (0xc001756630) Reply frame received for 3 I0124 13:13:32.810242 9 log.go:172] (0xc001756630) (0xc001b020a0) Create stream I0124 13:13:32.810250 9 log.go:172] (0xc001756630) (0xc001b020a0) Stream added, broadcasting: 5 I0124 13:13:32.811493 9 log.go:172] (0xc001756630) Reply frame received for 5 I0124 13:13:32.949852 9 log.go:172] (0xc001756630) Data frame received for 3 I0124 13:13:32.949895 9 log.go:172] (0xc0016332c0) (3) Data frame handling I0124 13:13:32.949904 9 log.go:172] (0xc0016332c0) (3) Data frame sent I0124 13:13:33.065024 9 log.go:172] (0xc001756630) Data frame received for 1 I0124 13:13:33.065149 9 log.go:172] (0xc0020ca460) (1) Data frame handling I0124 13:13:33.065164 9 log.go:172] (0xc0020ca460) (1) Data frame sent I0124 13:13:33.065177 9 log.go:172] (0xc001756630) (0xc0020ca460) Stream removed, broadcasting: 1 I0124 13:13:33.068800 9 log.go:172] (0xc001756630) (0xc0016332c0) Stream removed, broadcasting: 3 I0124 13:13:33.068856 9 log.go:172] (0xc001756630) (0xc001b020a0) Stream removed, broadcasting: 5 I0124 13:13:33.068883 9 log.go:172] (0xc001756630) (0xc0020ca460) Stream removed, broadcasting: 1 I0124 13:13:33.068926 9 log.go:172] (0xc001756630) (0xc0016332c0) Stream removed, broadcasting: 3 I0124 13:13:33.068956 9 log.go:172] (0xc001756630) (0xc001b020a0) Stream removed, broadcasting: 5 Jan 24 13:13:33.068: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:13:33.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0124 13:13:33.069519 9 log.go:172] (0xc001756630) Go away received STEP: Destroying namespace "e2e-kubelet-etc-hosts-9107" for this suite. Jan 24 13:14:19.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:14:19.227: INFO: namespace e2e-kubelet-etc-hosts-9107 deletion completed in 46.146699021s • [SLOW TEST:70.010 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:14:19.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-aea13baa-0e62-4974-9f42-cb85a115a247 STEP: Creating a pod to test consume configMaps Jan 24 13:14:19.401: INFO: Waiting up to 5m0s for pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95" in 
namespace "configmap-2993" to be "success or failure" Jan 24 13:14:19.413: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95": Phase="Pending", Reason="", readiness=false. Elapsed: 11.62558ms Jan 24 13:14:21.423: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021267613s Jan 24 13:14:23.514: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112190185s Jan 24 13:14:25.521: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11900635s Jan 24 13:14:27.528: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126410925s STEP: Saw pod success Jan 24 13:14:27.528: INFO: Pod "pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95" satisfied condition "success or failure" Jan 24 13:14:27.532: INFO: Trying to get logs from node iruya-node pod pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95 container configmap-volume-test: STEP: delete the pod Jan 24 13:14:27.616: INFO: Waiting for pod pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95 to disappear Jan 24 13:14:27.621: INFO: Pod pod-configmaps-44690f22-6ea8-4616-8bf5-7849c6cdaa95 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:14:27.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2993" for this suite. 
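The ConfigMap volume test above follows a common e2e pattern: create a ConfigMap, mount it into a short-lived pod that runs as a non-root user, and wait for the pod to reach a terminal phase ("success or failure"). A minimal sketch of that manifest shape, as plain Python dicts — all names, UIDs, and values here are illustrative assumptions, not taken from the test source:

```python
# Sketch of the ConfigMap + pod shape exercised by the
# "consumable from pods in volume as non-root" test.
# Names and values are illustrative, not from the real test.

config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-volume"},
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps"},
    "spec": {
        # restartPolicy Never lets the framework poll the pod until it
        # reaches a terminal Succeeded/Failed phase, as seen in the log.
        "restartPolicy": "Never",
        # Non-root user, matching the [LinuxOnly] non-root variant.
        "securityContext": {"runAsUser": 1000},
        "volumes": [
            {
                "name": "configmap-volume",
                "configMap": {"name": config_map["metadata"]["name"]},
            }
        ],
        "containers": [
            {
                "name": "configmap-volume-test",
                "image": "busybox",
                "command": ["cat", "/etc/configmap-volume/data-1"],
                "volumeMounts": [
                    {
                        "name": "configmap-volume",
                        "mountPath": "/etc/configmap-volume",
                    }
                ],
            }
        ],
    },
}

# The mounted volume must reference the ConfigMap by name.
assert pod["spec"]["volumes"][0]["configMap"]["name"] == \
    config_map["metadata"]["name"]
```

Once the container exits 0, the pod phase becomes Succeeded and the framework logs "Saw pod success", then deletes the pod, as the records above show.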
Jan 24 13:14:33.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:14:33.872: INFO: namespace configmap-2993 deletion completed in 6.244304817s • [SLOW TEST:14.645 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:14:33.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-8332/configmap-test-550d72d0-b474-4f29-b081-1217737ba7a7 STEP: Creating a pod to test consume configMaps Jan 24 13:14:34.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24" in namespace "configmap-8332" to be "success or failure" Jan 24 13:14:34.032: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24": Phase="Pending", Reason="", readiness=false. Elapsed: 7.341158ms Jan 24 13:14:36.044: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019512865s Jan 24 13:14:38.051: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026303445s Jan 24 13:14:40.058: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032712901s Jan 24 13:14:42.064: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039576832s STEP: Saw pod success Jan 24 13:14:42.065: INFO: Pod "pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24" satisfied condition "success or failure" Jan 24 13:14:42.068: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24 container env-test: STEP: delete the pod Jan 24 13:14:42.198: INFO: Waiting for pod pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24 to disappear Jan 24 13:14:42.210: INFO: Pod pod-configmaps-b3188ea2-71ac-49ad-960e-0a7d14464f24 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:14:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8332" for this suite. 
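The env-test above consumes the same kind of ConfigMap through environment variables instead of a volume. A sketch of that container shape — key and variable names are illustrative assumptions:

```python
# Sketch of consuming a ConfigMap via environment variables, the
# pattern the "consumable via environment variable" test exercises.
# Key/value names below are illustrative, not from the test source.

config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},
    "data": {"data-1": "value-1"},
}

container = {
    "name": "env-test",
    "image": "busybox",
    # Dump the environment so the framework can check the pod logs
    # for the injected variable.
    "command": ["sh", "-c", "env"],
    "env": [
        {
            "name": "CONFIG_DATA_1",
            "valueFrom": {
                "configMapKeyRef": {
                    "name": config_map["metadata"]["name"],
                    "key": "data-1",
                }
            },
        }
    ],
}

# The referenced key must exist in the ConfigMap's data.
assert container["env"][0]["valueFrom"]["configMapKeyRef"]["key"] \
    in config_map["data"]
```

Unlike volume-mounted ConfigMaps, environment variables are resolved once at container start, which is why this variant only needs a run-to-completion pod rather than a long-lived one.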
Jan 24 13:14:48.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:14:48.645: INFO: namespace configmap-8332 deletion completed in 6.415511283s • [SLOW TEST:14.772 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:14:48.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2359c446-edf5-4f0e-9977-ad57ab7067bf STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2359c446-edf5-4f0e-9977-ad57ab7067bf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:14:59.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3592" for this suite. 
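The projected-ConfigMap test that follows ("updates should be reflected in volume") relies on the kubelet resyncing volume contents after the ConfigMap changes, so the pod keeps running and re-reads the file. A sketch of the projected-volume shape under the same illustrative-naming caveat as above:

```python
# Sketch of a projected volume layering a ConfigMap into a pod, the
# shape exercised by the "updates should be reflected in volume" test.
# Names and paths are illustrative assumptions.

pod_spec = {
    "volumes": [
        {
            "name": "projected-configmap-volume",
            "projected": {
                "sources": [
                    {
                        "configMap": {
                            "name": "projected-configmap-test-upd",
                            "items": [{"key": "data-1", "path": "data-1"}],
                        }
                    }
                ]
            },
        }
    ],
    "containers": [
        {
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            # Poll the file in a loop so an updated ConfigMap value
            # eventually shows up in the container logs once the
            # kubelet refreshes the projected volume.
            "command": [
                "/bin/sh", "-c",
                "while true; do cat /etc/projected/data-1; sleep 5; done",
            ],
            "volumeMounts": [
                {
                    "name": "projected-configmap-volume",
                    "mountPath": "/etc/projected",
                }
            ],
        }
    ],
}

sources = pod_spec["volumes"][0]["projected"]["sources"]
assert sources[0]["configMap"]["items"][0]["path"] == "data-1"
```

The roughly 10-second gap between "waiting to observe update in volume" and teardown in the log is consistent with this resync-then-observe flow, though the exact propagation delay depends on the kubelet's sync period and cache settings.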
Jan 24 13:15:21.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:15:21.430: INFO: namespace projected-3592 deletion completed in 22.22043357s • [SLOW TEST:32.785 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:15:21.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 24 13:15:21.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-966' Jan 24 13:15:23.276: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 13:15:23.276: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 24 13:15:23.304: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6n8db] Jan 24 13:15:23.305: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6n8db" in namespace "kubectl-966" to be "running and ready" Jan 24 13:15:23.323: INFO: Pod "e2e-test-nginx-rc-6n8db": Phase="Pending", Reason="", readiness=false. Elapsed: 17.914359ms Jan 24 13:15:25.330: INFO: Pod "e2e-test-nginx-rc-6n8db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02499427s Jan 24 13:15:27.336: INFO: Pod "e2e-test-nginx-rc-6n8db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031591187s Jan 24 13:15:29.346: INFO: Pod "e2e-test-nginx-rc-6n8db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041718529s Jan 24 13:15:31.357: INFO: Pod "e2e-test-nginx-rc-6n8db": Phase="Running", Reason="", readiness=true. Elapsed: 8.05235928s Jan 24 13:15:31.357: INFO: Pod "e2e-test-nginx-rc-6n8db" satisfied condition "running and ready" Jan 24 13:15:31.357: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-6n8db] Jan 24 13:15:31.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-966' Jan 24 13:15:31.615: INFO: stderr: "" Jan 24 13:15:31.615: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 24 13:15:31.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-966' Jan 24 13:15:31.715: INFO: stderr: "" Jan 24 13:15:31.715: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:15:31.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-966" for this suite. Jan 24 13:15:53.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:15:53.903: INFO: namespace kubectl-966 deletion completed in 22.182854444s • [SLOW TEST:32.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jan 24 13:15:53.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 24 13:15:54.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32" in namespace "projected-7222" to be "success or failure" Jan 24 13:15:54.036: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32": Phase="Pending", Reason="", readiness=false. Elapsed: 14.513544ms Jan 24 13:15:56.045: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024390107s Jan 24 13:15:58.084: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062625701s Jan 24 13:16:00.092: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070905913s Jan 24 13:16:02.098: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.076903202s STEP: Saw pod success Jan 24 13:16:02.098: INFO: Pod "downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32" satisfied condition "success or failure" Jan 24 13:16:02.103: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32 container client-container: STEP: delete the pod Jan 24 13:16:02.211: INFO: Waiting for pod downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32 to disappear Jan 24 13:16:02.341: INFO: Pod downwardapi-volume-0a3a0ebb-efb0-45b7-a633-2f5a0f940b32 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:16:02.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7222" for this suite. Jan 24 13:16:08.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:16:08.612: INFO: namespace projected-7222 deletion completed in 6.263119462s • [SLOW TEST:14.708 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:16:08.612: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8952 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8952 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8952 Jan 24 13:16:08.781: INFO: Found 0 stateful pods, waiting for 1 Jan 24 13:16:18.788: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 24 13:16:18.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:16:19.327: INFO: stderr: "I0124 13:16:18.972347 1098 log.go:172] (0xc000836420) (0xc0006c6aa0) Create stream\nI0124 13:16:18.972443 1098 log.go:172] (0xc000836420) (0xc0006c6aa0) Stream added, broadcasting: 1\nI0124 13:16:18.981992 1098 log.go:172] (0xc000836420) Reply frame received for 1\nI0124 13:16:18.982031 1098 log.go:172] (0xc000836420) (0xc00086c000) Create stream\nI0124 13:16:18.982041 1098 log.go:172] (0xc000836420) (0xc00086c000) Stream added, broadcasting: 3\nI0124 13:16:18.984304 1098 log.go:172] (0xc000836420) Reply frame received for 3\nI0124 13:16:18.984350 1098 log.go:172] (0xc000836420) (0xc0009a0000) Create 
stream\nI0124 13:16:18.984363 1098 log.go:172] (0xc000836420) (0xc0009a0000) Stream added, broadcasting: 5\nI0124 13:16:18.986102 1098 log.go:172] (0xc000836420) Reply frame received for 5\nI0124 13:16:19.113221 1098 log.go:172] (0xc000836420) Data frame received for 5\nI0124 13:16:19.113278 1098 log.go:172] (0xc0009a0000) (5) Data frame handling\nI0124 13:16:19.113306 1098 log.go:172] (0xc0009a0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:16:19.175438 1098 log.go:172] (0xc000836420) Data frame received for 3\nI0124 13:16:19.175490 1098 log.go:172] (0xc00086c000) (3) Data frame handling\nI0124 13:16:19.175517 1098 log.go:172] (0xc00086c000) (3) Data frame sent\nI0124 13:16:19.317580 1098 log.go:172] (0xc000836420) (0xc00086c000) Stream removed, broadcasting: 3\nI0124 13:16:19.317731 1098 log.go:172] (0xc000836420) Data frame received for 1\nI0124 13:16:19.317757 1098 log.go:172] (0xc0006c6aa0) (1) Data frame handling\nI0124 13:16:19.317794 1098 log.go:172] (0xc0006c6aa0) (1) Data frame sent\nI0124 13:16:19.317823 1098 log.go:172] (0xc000836420) (0xc0009a0000) Stream removed, broadcasting: 5\nI0124 13:16:19.317882 1098 log.go:172] (0xc000836420) (0xc0006c6aa0) Stream removed, broadcasting: 1\nI0124 13:16:19.317899 1098 log.go:172] (0xc000836420) Go away received\nI0124 13:16:19.318830 1098 log.go:172] (0xc000836420) (0xc0006c6aa0) Stream removed, broadcasting: 1\nI0124 13:16:19.318926 1098 log.go:172] (0xc000836420) (0xc00086c000) Stream removed, broadcasting: 3\nI0124 13:16:19.318941 1098 log.go:172] (0xc000836420) (0xc0009a0000) Stream removed, broadcasting: 5\n" Jan 24 13:16:19.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:16:19.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:16:19.337: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 24 
13:16:29.471: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:16:29.471: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:16:29.523: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999555s Jan 24 13:16:30.544: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971370896s Jan 24 13:16:31.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.950923049s Jan 24 13:16:32.568: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.941269556s Jan 24 13:16:33.577: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.926017033s Jan 24 13:16:34.596: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.916859186s Jan 24 13:16:35.614: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.898188162s Jan 24 13:16:36.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.880679987s Jan 24 13:16:37.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.873338841s Jan 24 13:16:38.637: INFO: Verifying statefulset ss doesn't scale past 1 for another 865.95537ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8952 Jan 24 13:16:39.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:16:40.147: INFO: stderr: "I0124 13:16:39.877598 1118 log.go:172] (0xc00012adc0) (0xc0007746e0) Create stream\nI0124 13:16:39.877753 1118 log.go:172] (0xc00012adc0) (0xc0007746e0) Stream added, broadcasting: 1\nI0124 13:16:39.884234 1118 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0124 13:16:39.884271 1118 log.go:172] (0xc00012adc0) (0xc000774780) Create stream\nI0124 13:16:39.884279 1118 log.go:172] (0xc00012adc0) (0xc000774780) Stream added, broadcasting: 3\nI0124 13:16:39.886064 
1118 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0124 13:16:39.886104 1118 log.go:172] (0xc00012adc0) (0xc0003abcc0) Create stream\nI0124 13:16:39.886116 1118 log.go:172] (0xc00012adc0) (0xc0003abcc0) Stream added, broadcasting: 5\nI0124 13:16:39.889213 1118 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0124 13:16:40.010363 1118 log.go:172] (0xc00012adc0) Data frame received for 3\nI0124 13:16:40.010475 1118 log.go:172] (0xc000774780) (3) Data frame handling\nI0124 13:16:40.010507 1118 log.go:172] (0xc000774780) (3) Data frame sent\nI0124 13:16:40.010711 1118 log.go:172] (0xc00012adc0) Data frame received for 5\nI0124 13:16:40.010729 1118 log.go:172] (0xc0003abcc0) (5) Data frame handling\nI0124 13:16:40.010736 1118 log.go:172] (0xc0003abcc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:16:40.141153 1118 log.go:172] (0xc00012adc0) (0xc000774780) Stream removed, broadcasting: 3\nI0124 13:16:40.141303 1118 log.go:172] (0xc00012adc0) Data frame received for 1\nI0124 13:16:40.141324 1118 log.go:172] (0xc0007746e0) (1) Data frame handling\nI0124 13:16:40.141338 1118 log.go:172] (0xc0007746e0) (1) Data frame sent\nI0124 13:16:40.141349 1118 log.go:172] (0xc00012adc0) (0xc0007746e0) Stream removed, broadcasting: 1\nI0124 13:16:40.141390 1118 log.go:172] (0xc00012adc0) (0xc0003abcc0) Stream removed, broadcasting: 5\nI0124 13:16:40.141455 1118 log.go:172] (0xc00012adc0) Go away received\nI0124 13:16:40.141621 1118 log.go:172] (0xc00012adc0) (0xc0007746e0) Stream removed, broadcasting: 1\nI0124 13:16:40.141634 1118 log.go:172] (0xc00012adc0) (0xc000774780) Stream removed, broadcasting: 3\nI0124 13:16:40.141639 1118 log.go:172] (0xc00012adc0) (0xc0003abcc0) Stream removed, broadcasting: 5\n" Jan 24 13:16:40.147: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:16:40.147: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Jan 24 13:16:40.155: INFO: Found 1 stateful pods, waiting for 3 Jan 24 13:16:50.164: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:16:50.164: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:16:50.164: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 13:17:00.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:17:00.166: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:17:00.166: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 24 13:17:00.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:17:00.899: INFO: stderr: "I0124 13:17:00.340248 1137 log.go:172] (0xc0005862c0) (0xc00030a640) Create stream\nI0124 13:17:00.340323 1137 log.go:172] (0xc0005862c0) (0xc00030a640) Stream added, broadcasting: 1\nI0124 13:17:00.347506 1137 log.go:172] (0xc0005862c0) Reply frame received for 1\nI0124 13:17:00.347540 1137 log.go:172] (0xc0005862c0) (0xc0005d43c0) Create stream\nI0124 13:17:00.347546 1137 log.go:172] (0xc0005862c0) (0xc0005d43c0) Stream added, broadcasting: 3\nI0124 13:17:00.349703 1137 log.go:172] (0xc0005862c0) Reply frame received for 3\nI0124 13:17:00.349724 1137 log.go:172] (0xc0005862c0) (0xc000782000) Create stream\nI0124 13:17:00.349732 1137 log.go:172] (0xc0005862c0) (0xc000782000) Stream added, broadcasting: 5\nI0124 13:17:00.353386 1137 log.go:172] (0xc0005862c0) Reply frame received for 5\nI0124 13:17:00.461690 1137 log.go:172] (0xc0005862c0) Data frame received for 5\nI0124 
13:17:00.461795 1137 log.go:172] (0xc000782000) (5) Data frame handling\nI0124 13:17:00.461806 1137 log.go:172] (0xc000782000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:17:00.461925 1137 log.go:172] (0xc0005862c0) Data frame received for 3\nI0124 13:17:00.461950 1137 log.go:172] (0xc0005d43c0) (3) Data frame handling\nI0124 13:17:00.461988 1137 log.go:172] (0xc0005d43c0) (3) Data frame sent\nI0124 13:17:00.887068 1137 log.go:172] (0xc0005862c0) (0xc0005d43c0) Stream removed, broadcasting: 3\nI0124 13:17:00.889063 1137 log.go:172] (0xc0005862c0) Data frame received for 1\nI0124 13:17:00.889288 1137 log.go:172] (0xc0005862c0) (0xc000782000) Stream removed, broadcasting: 5\nI0124 13:17:00.889584 1137 log.go:172] (0xc00030a640) (1) Data frame handling\nI0124 13:17:00.889712 1137 log.go:172] (0xc00030a640) (1) Data frame sent\nI0124 13:17:00.889750 1137 log.go:172] (0xc0005862c0) (0xc00030a640) Stream removed, broadcasting: 1\nI0124 13:17:00.889776 1137 log.go:172] (0xc0005862c0) Go away received\nI0124 13:17:00.890272 1137 log.go:172] (0xc0005862c0) (0xc00030a640) Stream removed, broadcasting: 1\nI0124 13:17:00.890291 1137 log.go:172] (0xc0005862c0) (0xc0005d43c0) Stream removed, broadcasting: 3\nI0124 13:17:00.890297 1137 log.go:172] (0xc0005862c0) (0xc000782000) Stream removed, broadcasting: 5\n" Jan 24 13:17:00.900: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:17:00.900: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:17:00.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:17:01.307: INFO: stderr: "I0124 13:17:01.089598 1152 log.go:172] (0xc0005ec420) (0xc0006fe6e0) Create stream\nI0124 13:17:01.089659 1152 log.go:172] (0xc0005ec420) (0xc0006fe6e0) Stream 
added, broadcasting: 1\nI0124 13:17:01.093221 1152 log.go:172] (0xc0005ec420) Reply frame received for 1\nI0124 13:17:01.093245 1152 log.go:172] (0xc0005ec420) (0xc0005e8320) Create stream\nI0124 13:17:01.093250 1152 log.go:172] (0xc0005ec420) (0xc0005e8320) Stream added, broadcasting: 3\nI0124 13:17:01.094059 1152 log.go:172] (0xc0005ec420) Reply frame received for 3\nI0124 13:17:01.094074 1152 log.go:172] (0xc0005ec420) (0xc0006fe780) Create stream\nI0124 13:17:01.094079 1152 log.go:172] (0xc0005ec420) (0xc0006fe780) Stream added, broadcasting: 5\nI0124 13:17:01.095115 1152 log.go:172] (0xc0005ec420) Reply frame received for 5\nI0124 13:17:01.195976 1152 log.go:172] (0xc0005ec420) Data frame received for 5\nI0124 13:17:01.195994 1152 log.go:172] (0xc0006fe780) (5) Data frame handling\nI0124 13:17:01.196002 1152 log.go:172] (0xc0006fe780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:17:01.240939 1152 log.go:172] (0xc0005ec420) Data frame received for 3\nI0124 13:17:01.240957 1152 log.go:172] (0xc0005e8320) (3) Data frame handling\nI0124 13:17:01.240973 1152 log.go:172] (0xc0005e8320) (3) Data frame sent\nI0124 13:17:01.304219 1152 log.go:172] (0xc0005ec420) (0xc0005e8320) Stream removed, broadcasting: 3\nI0124 13:17:01.304374 1152 log.go:172] (0xc0005ec420) Data frame received for 1\nI0124 13:17:01.304462 1152 log.go:172] (0xc0005ec420) (0xc0006fe780) Stream removed, broadcasting: 5\nI0124 13:17:01.304509 1152 log.go:172] (0xc0006fe6e0) (1) Data frame handling\nI0124 13:17:01.304550 1152 log.go:172] (0xc0006fe6e0) (1) Data frame sent\nI0124 13:17:01.304612 1152 log.go:172] (0xc0005ec420) (0xc0006fe6e0) Stream removed, broadcasting: 1\nI0124 13:17:01.304655 1152 log.go:172] (0xc0005ec420) Go away received\nI0124 13:17:01.304885 1152 log.go:172] (0xc0005ec420) (0xc0006fe6e0) Stream removed, broadcasting: 1\nI0124 13:17:01.304901 1152 log.go:172] (0xc0005ec420) (0xc0005e8320) Stream removed, broadcasting: 3\nI0124 13:17:01.304906 1152 
log.go:172] (0xc0005ec420) (0xc0006fe780) Stream removed, broadcasting: 5\n" Jan 24 13:17:01.307: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:17:01.307: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:17:01.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:17:01.711: INFO: stderr: "I0124 13:17:01.431539 1168 log.go:172] (0xc0008b6000) (0xc000a16140) Create stream\nI0124 13:17:01.431771 1168 log.go:172] (0xc0008b6000) (0xc000a16140) Stream added, broadcasting: 1\nI0124 13:17:01.438712 1168 log.go:172] (0xc0008b6000) Reply frame received for 1\nI0124 13:17:01.438773 1168 log.go:172] (0xc0008b6000) (0xc00053e3c0) Create stream\nI0124 13:17:01.438784 1168 log.go:172] (0xc0008b6000) (0xc00053e3c0) Stream added, broadcasting: 3\nI0124 13:17:01.439771 1168 log.go:172] (0xc0008b6000) Reply frame received for 3\nI0124 13:17:01.439789 1168 log.go:172] (0xc0008b6000) (0xc00053e460) Create stream\nI0124 13:17:01.439796 1168 log.go:172] (0xc0008b6000) (0xc00053e460) Stream added, broadcasting: 5\nI0124 13:17:01.440663 1168 log.go:172] (0xc0008b6000) Reply frame received for 5\nI0124 13:17:01.568204 1168 log.go:172] (0xc0008b6000) Data frame received for 5\nI0124 13:17:01.568341 1168 log.go:172] (0xc00053e460) (5) Data frame handling\nI0124 13:17:01.568368 1168 log.go:172] (0xc00053e460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:17:01.609761 1168 log.go:172] (0xc0008b6000) Data frame received for 3\nI0124 13:17:01.609774 1168 log.go:172] (0xc00053e3c0) (3) Data frame handling\nI0124 13:17:01.609800 1168 log.go:172] (0xc00053e3c0) (3) Data frame sent\nI0124 13:17:01.705227 1168 log.go:172] (0xc0008b6000) Data frame received for 1\nI0124 13:17:01.705485 1168 log.go:172] 
(0xc0008b6000) (0xc00053e3c0) Stream removed, broadcasting: 3\nI0124 13:17:01.705523 1168 log.go:172] (0xc000a16140) (1) Data frame handling\nI0124 13:17:01.705548 1168 log.go:172] (0xc0008b6000) (0xc00053e460) Stream removed, broadcasting: 5\nI0124 13:17:01.705577 1168 log.go:172] (0xc000a16140) (1) Data frame sent\nI0124 13:17:01.705592 1168 log.go:172] (0xc0008b6000) (0xc000a16140) Stream removed, broadcasting: 1\nI0124 13:17:01.705611 1168 log.go:172] (0xc0008b6000) Go away received\nI0124 13:17:01.706103 1168 log.go:172] (0xc0008b6000) (0xc000a16140) Stream removed, broadcasting: 1\nI0124 13:17:01.706125 1168 log.go:172] (0xc0008b6000) (0xc00053e3c0) Stream removed, broadcasting: 3\nI0124 13:17:01.706134 1168 log.go:172] (0xc0008b6000) (0xc00053e460) Stream removed, broadcasting: 5\n" Jan 24 13:17:01.711: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:17:01.711: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:17:01.711: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:17:01.717: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 24 13:17:11.733: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:17:11.733: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:17:11.733: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:17:11.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999503s Jan 24 13:17:12.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989725455s Jan 24 13:17:13.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980396018s Jan 24 13:17:14.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949932528s Jan 24 13:17:15.818: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 5.937751329s Jan 24 13:17:16.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.92842201s Jan 24 13:17:18.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.915166893s Jan 24 13:17:19.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.115039575s Jan 24 13:17:20.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.098714971s Jan 24 13:17:21.673: INFO: Verifying statefulset ss doesn't scale past 3 for another 87.53838ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8952 Jan 24 13:17:22.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:17:23.150: INFO: stderr: "I0124 13:17:22.880018 1188 log.go:172] (0xc000958370) (0xc0008ac5a0) Create stream\nI0124 13:17:22.880144 1188 log.go:172] (0xc000958370) (0xc0008ac5a0) Stream added, broadcasting: 1\nI0124 13:17:22.885708 1188 log.go:172] (0xc000958370) Reply frame received for 1\nI0124 13:17:22.885738 1188 log.go:172] (0xc000958370) (0xc0008ac640) Create stream\nI0124 13:17:22.885750 1188 log.go:172] (0xc000958370) (0xc0008ac640) Stream added, broadcasting: 3\nI0124 13:17:22.887263 1188 log.go:172] (0xc000958370) Reply frame received for 3\nI0124 13:17:22.887292 1188 log.go:172] (0xc000958370) (0xc00099a000) Create stream\nI0124 13:17:22.887306 1188 log.go:172] (0xc000958370) (0xc00099a000) Stream added, broadcasting: 5\nI0124 13:17:22.888788 1188 log.go:172] (0xc000958370) Reply frame received for 5\nI0124 13:17:23.002299 1188 log.go:172] (0xc000958370) Data frame received for 3\nI0124 13:17:23.002390 1188 log.go:172] (0xc0008ac640) (3) Data frame handling\nI0124 13:17:23.002442 1188 log.go:172] (0xc000958370) Data frame received for 5\nI0124 13:17:23.002465 1188 log.go:172] (0xc00099a000) (5) 
Data frame handling\nI0124 13:17:23.002473 1188 log.go:172] (0xc00099a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:17:23.002484 1188 log.go:172] (0xc0008ac640) (3) Data frame sent\nI0124 13:17:23.141710 1188 log.go:172] (0xc000958370) Data frame received for 1\nI0124 13:17:23.142143 1188 log.go:172] (0xc000958370) (0xc0008ac640) Stream removed, broadcasting: 3\nI0124 13:17:23.142324 1188 log.go:172] (0xc0008ac5a0) (1) Data frame handling\nI0124 13:17:23.142401 1188 log.go:172] (0xc0008ac5a0) (1) Data frame sent\nI0124 13:17:23.142462 1188 log.go:172] (0xc000958370) (0xc00099a000) Stream removed, broadcasting: 5\nI0124 13:17:23.142527 1188 log.go:172] (0xc000958370) (0xc0008ac5a0) Stream removed, broadcasting: 1\nI0124 13:17:23.142653 1188 log.go:172] (0xc000958370) Go away received\nI0124 13:17:23.143553 1188 log.go:172] (0xc000958370) (0xc0008ac5a0) Stream removed, broadcasting: 1\nI0124 13:17:23.143575 1188 log.go:172] (0xc000958370) (0xc0008ac640) Stream removed, broadcasting: 3\nI0124 13:17:23.143581 1188 log.go:172] (0xc000958370) (0xc00099a000) Stream removed, broadcasting: 5\n" Jan 24 13:17:23.150: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:17:23.150: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:17:23.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:17:23.696: INFO: stderr: "I0124 13:17:23.432722 1207 log.go:172] (0xc000116dc0) (0xc0005b6640) Create stream\nI0124 13:17:23.432990 1207 log.go:172] (0xc000116dc0) (0xc0005b6640) Stream added, broadcasting: 1\nI0124 13:17:23.437637 1207 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0124 13:17:23.437736 1207 log.go:172] (0xc000116dc0) (0xc0006c0320) Create stream\nI0124 13:17:23.437750 
1207 log.go:172] (0xc000116dc0) (0xc0006c0320) Stream added, broadcasting: 3\nI0124 13:17:23.438935 1207 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0124 13:17:23.438956 1207 log.go:172] (0xc000116dc0) (0xc0005b66e0) Create stream\nI0124 13:17:23.438962 1207 log.go:172] (0xc000116dc0) (0xc0005b66e0) Stream added, broadcasting: 5\nI0124 13:17:23.440218 1207 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0124 13:17:23.541883 1207 log.go:172] (0xc000116dc0) Data frame received for 3\nI0124 13:17:23.542000 1207 log.go:172] (0xc0006c0320) (3) Data frame handling\nI0124 13:17:23.542014 1207 log.go:172] (0xc0006c0320) (3) Data frame sent\nI0124 13:17:23.542045 1207 log.go:172] (0xc000116dc0) Data frame received for 5\nI0124 13:17:23.542074 1207 log.go:172] (0xc0005b66e0) (5) Data frame handling\nI0124 13:17:23.542081 1207 log.go:172] (0xc0005b66e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:17:23.686941 1207 log.go:172] (0xc000116dc0) Data frame received for 1\nI0124 13:17:23.687082 1207 log.go:172] (0xc0005b6640) (1) Data frame handling\nI0124 13:17:23.687110 1207 log.go:172] (0xc0005b6640) (1) Data frame sent\nI0124 13:17:23.687122 1207 log.go:172] (0xc000116dc0) (0xc0005b6640) Stream removed, broadcasting: 1\nI0124 13:17:23.687316 1207 log.go:172] (0xc000116dc0) (0xc0006c0320) Stream removed, broadcasting: 3\nI0124 13:17:23.687367 1207 log.go:172] (0xc000116dc0) (0xc0005b66e0) Stream removed, broadcasting: 5\nI0124 13:17:23.687433 1207 log.go:172] (0xc000116dc0) Go away received\nI0124 13:17:23.687876 1207 log.go:172] (0xc000116dc0) (0xc0005b6640) Stream removed, broadcasting: 1\nI0124 13:17:23.687949 1207 log.go:172] (0xc000116dc0) (0xc0006c0320) Stream removed, broadcasting: 3\nI0124 13:17:23.687975 1207 log.go:172] (0xc000116dc0) (0xc0005b66e0) Stream removed, broadcasting: 5\n" Jan 24 13:17:23.696: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:17:23.697: INFO: stdout of mv 
-v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:17:23.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8952 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:17:24.424: INFO: stderr: "I0124 13:17:23.889885 1225 log.go:172] (0xc000858370) (0xc0007e66e0) Create stream\nI0124 13:17:23.890010 1225 log.go:172] (0xc000858370) (0xc0007e66e0) Stream added, broadcasting: 1\nI0124 13:17:23.897510 1225 log.go:172] (0xc000858370) Reply frame received for 1\nI0124 13:17:23.897546 1225 log.go:172] (0xc000858370) (0xc000784320) Create stream\nI0124 13:17:23.897551 1225 log.go:172] (0xc000858370) (0xc000784320) Stream added, broadcasting: 3\nI0124 13:17:23.898668 1225 log.go:172] (0xc000858370) Reply frame received for 3\nI0124 13:17:23.898692 1225 log.go:172] (0xc000858370) (0xc0007843c0) Create stream\nI0124 13:17:23.898706 1225 log.go:172] (0xc000858370) (0xc0007843c0) Stream added, broadcasting: 5\nI0124 13:17:23.900263 1225 log.go:172] (0xc000858370) Reply frame received for 5\nI0124 13:17:24.178062 1225 log.go:172] (0xc000858370) Data frame received for 3\nI0124 13:17:24.178166 1225 log.go:172] (0xc000784320) (3) Data frame handling\nI0124 13:17:24.178179 1225 log.go:172] (0xc000784320) (3) Data frame sent\nI0124 13:17:24.178245 1225 log.go:172] (0xc000858370) Data frame received for 5\nI0124 13:17:24.178263 1225 log.go:172] (0xc0007843c0) (5) Data frame handling\nI0124 13:17:24.178274 1225 log.go:172] (0xc0007843c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:17:24.413288 1225 log.go:172] (0xc000858370) Data frame received for 1\nI0124 13:17:24.413434 1225 log.go:172] (0xc0007e66e0) (1) Data frame handling\nI0124 13:17:24.413452 1225 log.go:172] (0xc0007e66e0) (1) Data frame sent\nI0124 13:17:24.415225 1225 log.go:172] (0xc000858370) (0xc0007843c0) Stream removed, 
broadcasting: 5\nI0124 13:17:24.415378 1225 log.go:172] (0xc000858370) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0124 13:17:24.415609 1225 log.go:172] (0xc000858370) (0xc000784320) Stream removed, broadcasting: 3\nI0124 13:17:24.415678 1225 log.go:172] (0xc000858370) Go away received\nI0124 13:17:24.415808 1225 log.go:172] (0xc000858370) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0124 13:17:24.415822 1225 log.go:172] (0xc000858370) (0xc000784320) Stream removed, broadcasting: 3\nI0124 13:17:24.415832 1225 log.go:172] (0xc000858370) (0xc0007843c0) Stream removed, broadcasting: 5\n" Jan 24 13:17:24.425: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:17:24.425: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:17:24.425: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 24 13:17:54.459: INFO: Deleting all statefulset in ns statefulset-8952 Jan 24 13:17:54.468: INFO: Scaling statefulset ss to 0 Jan 24 13:17:54.485: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:17:54.490: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:17:54.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8952" for this suite. 
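The readiness toggling repeated throughout this test works because the StatefulSet's readiness probe checks for /usr/share/nginx/html/index.html: moving the file to /tmp makes a pod NotReady (halting ordered scaling), and moving it back restores readiness. A minimal sketch of the command pattern from the log, wrapped in a hypothetical `toggle_ready` helper (the `DRY_RUN` switch is an assumption added here so the command can be inspected without a cluster; namespace and pod names are taken from the log):

```shell
#!/bin/sh
# Sketch, assuming the pod's readiness probe tests for index.html in the
# nginx web root, as implied by the "mv -v ... || true" commands in the log.
toggle_ready() {  # usage: toggle_ready <pod> <true|false>
  pod=$1
  if [ "$2" = "false" ]; then
    # Hide the probed file: pod goes NotReady, ordered scale-down halts.
    args="mv -v /usr/share/nginx/html/index.html /tmp/ || true"
  else
    # Restore the probed file: pod becomes Ready again.
    args="mv -v /tmp/index.html /usr/share/nginx/html/ || true"
  fi
  cmd="kubectl --kubeconfig=\$HOME/.kube/config exec --namespace=statefulset-8952 $pod -- /bin/sh -x -c '$args'"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"        # print the command instead of running it
  else
    eval "$cmd"
  fi
}

DRY_RUN=1 toggle_ready ss-0 false
```

The `|| true` on the mv mirrors the log's commands: it keeps kubectl exec from reporting failure when the file has already been moved, so the toggle is idempotent.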
Jan 24 13:18:00.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:18:00.645: INFO: namespace statefulset-8952 deletion completed in 6.113446495s • [SLOW TEST:112.033 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:18:00.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:18:00.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5034" for 
this suite. Jan 24 13:18:20.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:18:20.926: INFO: namespace pods-5034 deletion completed in 20.133415601s • [SLOW TEST:20.281 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:18:20.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 24 13:18:21.039: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:18:21.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-701" for this suite. Jan 24 13:18:27.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:18:27.299: INFO: namespace kubectl-701 deletion completed in 6.165942311s • [SLOW TEST:6.372 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:18:27.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1419 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 13:18:27.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 24 13:19:01.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1419 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:19:01.589: INFO: >>> kubeConfig: /root/.kube/config I0124 13:19:01.669852 9 log.go:172] (0xc000b2c420) (0xc0004a1860) Create stream I0124 13:19:01.669895 9 log.go:172] (0xc000b2c420) (0xc0004a1860) Stream added, broadcasting: 1 I0124 13:19:01.676037 9 log.go:172] (0xc000b2c420) Reply frame received for 1 I0124 13:19:01.676110 9 log.go:172] (0xc000b2c420) (0xc0020cbcc0) Create stream I0124 13:19:01.676123 9 log.go:172] (0xc000b2c420) (0xc0020cbcc0) Stream added, broadcasting: 3 I0124 13:19:01.677702 9 log.go:172] (0xc000b2c420) Reply frame received for 3 I0124 13:19:01.677733 9 log.go:172] (0xc000b2c420) (0xc0004a1900) Create stream I0124 13:19:01.677745 9 log.go:172] (0xc000b2c420) (0xc0004a1900) Stream added, broadcasting: 5 I0124 13:19:01.679496 9 log.go:172] (0xc000b2c420) Reply frame received for 5 I0124 13:19:01.945258 9 log.go:172] (0xc000b2c420) Data frame received for 3 I0124 13:19:01.945291 9 log.go:172] (0xc0020cbcc0) (3) Data frame handling I0124 13:19:01.945307 9 log.go:172] (0xc0020cbcc0) (3) Data frame sent I0124 13:19:02.115468 9 log.go:172] (0xc000b2c420) Data frame received for 1 I0124 13:19:02.115648 9 log.go:172] (0xc0004a1860) (1) Data frame handling I0124 13:19:02.115684 9 log.go:172] (0xc0004a1860) (1) Data frame sent I0124 13:19:02.115710 9 log.go:172] (0xc000b2c420) (0xc0004a1860) Stream removed, broadcasting: 1 I0124 13:19:02.116082 9 log.go:172] (0xc000b2c420) (0xc0020cbcc0) Stream removed, broadcasting: 3 I0124 13:19:02.116106 9 log.go:172] (0xc000b2c420) (0xc0004a1900) Stream removed, broadcasting: 5 I0124 13:19:02.116126 9 log.go:172] (0xc000b2c420) (0xc0004a1860) Stream removed, broadcasting: 1 I0124 13:19:02.116136 9 log.go:172] (0xc000b2c420) (0xc0020cbcc0) Stream removed, broadcasting: 3 I0124 13:19:02.116144 9 log.go:172] (0xc000b2c420) (0xc0004a1900) Stream removed, broadcasting: 5 Jan 24 
13:19:02.116: INFO: Found all expected endpoints: [netserver-0] I0124 13:19:02.116524 9 log.go:172] (0xc000b2c420) Go away received Jan 24 13:19:02.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1419 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 13:19:02.123: INFO: >>> kubeConfig: /root/.kube/config I0124 13:19:02.225393 9 log.go:172] (0xc0009de580) (0xc0001135e0) Create stream I0124 13:19:02.225489 9 log.go:172] (0xc0009de580) (0xc0001135e0) Stream added, broadcasting: 1 I0124 13:19:02.234512 9 log.go:172] (0xc0009de580) Reply frame received for 1 I0124 13:19:02.234718 9 log.go:172] (0xc0009de580) (0xc0004a1cc0) Create stream I0124 13:19:02.234735 9 log.go:172] (0xc0009de580) (0xc0004a1cc0) Stream added, broadcasting: 3 I0124 13:19:02.236363 9 log.go:172] (0xc0009de580) Reply frame received for 3 I0124 13:19:02.236385 9 log.go:172] (0xc0009de580) (0xc000113680) Create stream I0124 13:19:02.236391 9 log.go:172] (0xc0009de580) (0xc000113680) Stream added, broadcasting: 5 I0124 13:19:02.240087 9 log.go:172] (0xc0009de580) Reply frame received for 5 I0124 13:19:02.419607 9 log.go:172] (0xc0009de580) Data frame received for 3 I0124 13:19:02.419744 9 log.go:172] (0xc0004a1cc0) (3) Data frame handling I0124 13:19:02.419793 9 log.go:172] (0xc0004a1cc0) (3) Data frame sent I0124 13:19:02.691792 9 log.go:172] (0xc0009de580) (0xc0004a1cc0) Stream removed, broadcasting: 3 I0124 13:19:02.691961 9 log.go:172] (0xc0009de580) Data frame received for 1 I0124 13:19:02.691988 9 log.go:172] (0xc0001135e0) (1) Data frame handling I0124 13:19:02.692002 9 log.go:172] (0xc0001135e0) (1) Data frame sent I0124 13:19:02.692049 9 log.go:172] (0xc0009de580) (0xc0001135e0) Stream removed, broadcasting: 1 I0124 13:19:02.692137 9 log.go:172] (0xc0009de580) (0xc000113680) Stream removed, 
broadcasting: 5 I0124 13:19:02.692222 9 log.go:172] (0xc0009de580) Go away received I0124 13:19:02.692320 9 log.go:172] (0xc0009de580) (0xc0001135e0) Stream removed, broadcasting: 1 I0124 13:19:02.692331 9 log.go:172] (0xc0009de580) (0xc0004a1cc0) Stream removed, broadcasting: 3 I0124 13:19:02.692337 9 log.go:172] (0xc0009de580) (0xc000113680) Stream removed, broadcasting: 5 Jan 24 13:19:02.692: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:19:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1419" for this suite. Jan 24 13:19:24.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:19:24.866: INFO: namespace pod-network-test-1419 deletion completed in 22.166145956s • [SLOW TEST:57.567 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:19:24.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-71127f76-f6d7-4f20-ad09-41d646209a2a STEP: Creating a pod to test consume configMaps Jan 24 13:19:24.948: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f" in namespace "projected-5315" to be "success or failure" Jan 24 13:19:24.953: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.188336ms Jan 24 13:19:26.985: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036684559s Jan 24 13:19:28.998: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050130444s Jan 24 13:19:31.006: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058364433s Jan 24 13:19:33.019: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.071188736s STEP: Saw pod success Jan 24 13:19:33.019: INFO: Pod "pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f" satisfied condition "success or failure" Jan 24 13:19:33.024: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f container projected-configmap-volume-test: STEP: delete the pod Jan 24 13:19:33.108: INFO: Waiting for pod pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f to disappear Jan 24 13:19:33.134: INFO: Pod pod-projected-configmaps-d8ded20e-fe86-40d5-bf38-280b12868d4f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:19:33.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5315" for this suite. Jan 24 13:19:39.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:19:39.353: INFO: namespace projected-5315 deletion completed in 6.2117277s • [SLOW TEST:14.487 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:19:39.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 24 13:19:39.445: INFO: Waiting up to 5m0s for pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701" in namespace "downward-api-7831" to be "success or failure" Jan 24 13:19:39.479: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701": Phase="Pending", Reason="", readiness=false. Elapsed: 34.219171ms Jan 24 13:19:41.493: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047571642s Jan 24 13:19:43.505: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060164912s Jan 24 13:19:45.516: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071197819s Jan 24 13:19:47.525: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.079535852s STEP: Saw pod success Jan 24 13:19:47.525: INFO: Pod "downward-api-9688f03d-0075-4c21-913f-b8614bb4a701" satisfied condition "success or failure" Jan 24 13:19:47.528: INFO: Trying to get logs from node iruya-node pod downward-api-9688f03d-0075-4c21-913f-b8614bb4a701 container dapi-container: STEP: delete the pod Jan 24 13:19:47.652: INFO: Waiting for pod downward-api-9688f03d-0075-4c21-913f-b8614bb4a701 to disappear Jan 24 13:19:47.666: INFO: Pod downward-api-9688f03d-0075-4c21-913f-b8614bb4a701 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:19:47.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7831" for this suite. Jan 24 13:19:53.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:19:53.833: INFO: namespace downward-api-7831 deletion completed in 6.144615703s • [SLOW TEST:14.479 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:19:53.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] 
should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 24 13:19:54.658: INFO: created pod pod-service-account-defaultsa Jan 24 13:19:54.658: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 24 13:19:54.677: INFO: created pod pod-service-account-mountsa Jan 24 13:19:54.677: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 24 13:19:54.800: INFO: created pod pod-service-account-nomountsa Jan 24 13:19:54.800: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 24 13:19:54.893: INFO: created pod pod-service-account-defaultsa-mountspec Jan 24 13:19:54.893: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 24 13:19:55.057: INFO: created pod pod-service-account-mountsa-mountspec Jan 24 13:19:55.057: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 24 13:19:55.074: INFO: created pod pod-service-account-nomountsa-mountspec Jan 24 13:19:55.074: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 24 13:19:55.098: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 24 13:19:55.098: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 24 13:19:56.012: INFO: created pod pod-service-account-mountsa-nomountspec Jan 24 13:19:56.012: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 24 13:19:56.730: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 24 13:19:56.730: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
Jan 24 13:19:56.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9052" for this suite. Jan 24 13:20:24.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:20:24.784: INFO: namespace svcaccounts-9052 deletion completed in 27.583063615s • [SLOW TEST:30.951 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:20:24.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jan 24 13:20:24.973: INFO: Waiting up to 5m0s for pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84" in namespace "var-expansion-4809" to be "success or failure" Jan 24 13:20:24.988: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.664369ms Jan 24 13:20:26.998: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024893636s Jan 24 13:20:29.007: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033741971s Jan 24 13:20:31.012: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039248346s Jan 24 13:20:33.027: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053604043s STEP: Saw pod success Jan 24 13:20:33.027: INFO: Pod "var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84" satisfied condition "success or failure" Jan 24 13:20:33.031: INFO: Trying to get logs from node iruya-node pod var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84 container dapi-container: STEP: delete the pod Jan 24 13:20:33.104: INFO: Waiting for pod var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84 to disappear Jan 24 13:20:33.150: INFO: Pod var-expansion-8113c093-0412-4da8-9bde-13cbeac5bf84 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:20:33.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4809" for this suite. 
Jan 24 13:20:39.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:20:39.308: INFO: namespace var-expansion-4809 deletion completed in 6.149803145s • [SLOW TEST:14.523 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:20:39.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 24 13:20:39.437: INFO: Waiting up to 5m0s for pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443" in namespace "emptydir-2632" to be "success or failure" Jan 24 13:20:39.459: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443": Phase="Pending", Reason="", readiness=false. Elapsed: 21.613494ms Jan 24 13:20:41.469: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032419447s Jan 24 13:20:43.486: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049391859s Jan 24 13:20:45.499: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062567491s Jan 24 13:20:47.508: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071484015s STEP: Saw pod success Jan 24 13:20:47.508: INFO: Pod "pod-f29e127a-c1e3-4910-a5bb-53268b3f6443" satisfied condition "success or failure" Jan 24 13:20:47.513: INFO: Trying to get logs from node iruya-node pod pod-f29e127a-c1e3-4910-a5bb-53268b3f6443 container test-container: STEP: delete the pod Jan 24 13:20:47.731: INFO: Waiting for pod pod-f29e127a-c1e3-4910-a5bb-53268b3f6443 to disappear Jan 24 13:20:47.756: INFO: Pod pod-f29e127a-c1e3-4910-a5bb-53268b3f6443 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:20:47.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2632" for this suite. 
Jan 24 13:20:53.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:20:53.969: INFO: namespace emptydir-2632 deletion completed in 6.204411657s • [SLOW TEST:14.661 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:20:53.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2b8e05f3-1f46-403b-a8f4-a51be8b0d5f6 STEP: Creating a pod to test consume configMaps Jan 24 13:20:54.154: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30" in namespace "projected-9881" to be "success or failure" Jan 24 13:20:54.205: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.593057ms Jan 24 13:20:56.225: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070623223s Jan 24 13:20:58.233: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07896142s Jan 24 13:21:00.244: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089836232s Jan 24 13:21:02.251: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096202012s STEP: Saw pod success Jan 24 13:21:02.251: INFO: Pod "pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30" satisfied condition "success or failure" Jan 24 13:21:02.253: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30 container projected-configmap-volume-test: STEP: delete the pod Jan 24 13:21:02.347: INFO: Waiting for pod pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30 to disappear Jan 24 13:21:02.365: INFO: Pod pod-projected-configmaps-6f54c5d1-e5cf-49d3-8df4-f00a36a06c30 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:21:02.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9881" for this suite. 
Jan 24 13:21:08.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:21:08.602: INFO: namespace projected-9881 deletion completed in 6.228067039s • [SLOW TEST:14.632 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:21:08.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3338/configmap-test-82498162-ac16-4b83-bebb-4b517f466240 STEP: Creating a pod to test consume configMaps Jan 24 13:21:08.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067" in namespace "configmap-3338" to be "success or failure" Jan 24 13:21:08.899: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067": Phase="Pending", Reason="", readiness=false. Elapsed: 13.391607ms Jan 24 13:21:10.918: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033027903s Jan 24 13:21:12.926: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040115475s Jan 24 13:21:14.936: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050197326s Jan 24 13:21:16.949: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06363653s STEP: Saw pod success Jan 24 13:21:16.949: INFO: Pod "pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067" satisfied condition "success or failure" Jan 24 13:21:16.955: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067 container env-test: STEP: delete the pod Jan 24 13:21:17.119: INFO: Waiting for pod pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067 to disappear Jan 24 13:21:17.123: INFO: Pod pod-configmaps-2a0160be-17eb-4344-b2a4-7053f33a8067 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:21:17.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3338" for this suite. 
Jan 24 13:21:23.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:21:23.259: INFO: namespace configmap-3338 deletion completed in 6.130716261s • [SLOW TEST:14.657 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:21:23.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 24 13:21:23.376: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Jan 24 13:21:24.157: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 24 13:21:26.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:21:28.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:21:30.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:21:32.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715468884, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:21:39.408: INFO: Waited 4.913627101s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:21:40.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-830" for this suite. Jan 24 13:21:46.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:21:46.384: INFO: namespace aggregator-830 deletion completed in 6.233560136s • [SLOW TEST:23.125 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:21:46.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test 
-n "$$(getent hosts dns-querier-1.dns-test-service.dns-2706.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2706.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2706.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2706.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2706.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2706.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 13:21:58.569: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.575: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.581: INFO: Unable to read 
jessie_hosts@dns-querier-1.dns-test-service.dns-2706.svc.cluster.local from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.589: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.601: INFO: Unable to read jessie_udp@PodARecord from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.615: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579: the server could not find the requested resource (get pods dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579) Jan 24 13:21:58.615: INFO: Lookups using dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2706.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 24 13:22:03.682: INFO: DNS probes using dns-2706/dns-test-0e19179f-0ccc-434b-8fc4-d0841b839579 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:22:03.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2706" for this suite. 
Jan 24 13:22:09.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:22:10.066: INFO: namespace dns-2706 deletion completed in 6.280435244s • [SLOW TEST:23.681 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:22:10.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 24 13:22:10.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1" in namespace "projected-7455" to be "success or failure" Jan 24 13:22:10.186: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.649306ms Jan 24 13:22:12.193: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051400408s Jan 24 13:22:14.203: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062234428s Jan 24 13:22:16.211: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070004927s Jan 24 13:22:18.222: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080309894s STEP: Saw pod success Jan 24 13:22:18.222: INFO: Pod "downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1" satisfied condition "success or failure" Jan 24 13:22:18.225: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1 container client-container: STEP: delete the pod Jan 24 13:22:18.332: INFO: Waiting for pod downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1 to disappear Jan 24 13:22:18.338: INFO: Pod downwardapi-volume-3b1ee5c9-ecf1-46bf-a7de-6fdaf4b1beb1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:22:18.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7455" for this suite. 
Jan 24 13:22:24.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:22:24.539: INFO: namespace projected-7455 deletion completed in 6.194162934s • [SLOW TEST:14.473 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:22:24.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 24 13:22:24.627: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:22:40.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-334" for this suite. 
Jan 24 13:23:02.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:23:03.077: INFO: namespace init-container-334 deletion completed in 22.114181881s • [SLOW TEST:38.537 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:23:03.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jan 24 13:23:03.175: INFO: Waiting up to 5m0s for pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0" in namespace "containers-7069" to be "success or failure" Jan 24 13:23:03.190: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.90419ms Jan 24 13:23:05.202: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026315243s Jan 24 13:23:07.208: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032737502s Jan 24 13:23:09.215: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039948998s Jan 24 13:23:11.222: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047032112s STEP: Saw pod success Jan 24 13:23:11.222: INFO: Pod "client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0" satisfied condition "success or failure" Jan 24 13:23:11.225: INFO: Trying to get logs from node iruya-node pod client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0 container test-container: STEP: delete the pod Jan 24 13:23:11.274: INFO: Waiting for pod client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0 to disappear Jan 24 13:23:11.280: INFO: Pod client-containers-c07f7261-8086-4c7a-b07a-f3c072c356a0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:23:11.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7069" for this suite. 
Jan 24 13:23:17.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:23:17.434: INFO: namespace containers-7069 deletion completed in 6.149977771s • [SLOW TEST:14.357 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:23:17.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 24 13:23:17.528: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:23:36.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pods-2953" for this suite. Jan 24 13:23:42.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:23:42.705: INFO: namespace pods-2953 deletion completed in 6.153669167s • [SLOW TEST:25.271 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:23:42.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-ab030078-30ad-4cd6-a58f-87be028ed922 in namespace container-probe-2203 Jan 24 13:23:50.829: INFO: Started pod test-webserver-ab030078-30ad-4cd6-a58f-87be028ed922 in namespace container-probe-2203 STEP: checking the pod's current state and verifying that restartCount is present Jan 24 13:23:50.833: INFO: Initial restart count of pod test-webserver-ab030078-30ad-4cd6-a58f-87be028ed922 is 0 STEP: deleting the pod 
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:27:52.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2203" for this suite. Jan 24 13:27:58.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:27:58.638: INFO: namespace container-probe-2203 deletion completed in 6.159784602s • [SLOW TEST:255.932 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:27:58.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 24 13:27:58.709: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 24 13:27:58.718: INFO: Waiting for terminating namespaces to be deleted... 
Jan 24 13:27:58.720: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 24 13:27:58.734: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.734: INFO: Container kube-proxy ready: true, restart count 0
Jan 24 13:27:58.734: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 24 13:27:58.734: INFO: Container weave ready: true, restart count 0
Jan 24 13:27:58.734: INFO: Container weave-npc ready: true, restart count 0
Jan 24 13:27:58.734: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 24 13:27:58.742: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container etcd ready: true, restart count 0
Jan 24 13:27:58.742: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container weave ready: true, restart count 0
Jan 24 13:27:58.742: INFO: Container weave-npc ready: true, restart count 0
Jan 24 13:27:58.742: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container coredns ready: true, restart count 0
Jan 24 13:27:58.742: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container kube-controller-manager ready: true, restart count 19
Jan 24 13:27:58.742: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container kube-proxy ready: true, restart count 0
Jan 24 13:27:58.742: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container kube-apiserver ready: true, restart count 0
Jan 24 13:27:58.742: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container kube-scheduler ready: true, restart count 13
Jan 24 13:27:58.742: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 24 13:27:58.742: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ecd5a620a9d249], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:27:59.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-13" for this suite.
Jan 24 13:28:07.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:28:08.048: INFO: namespace sched-pred-13 deletion completed in 8.23466899s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:9.410 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:28:08.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:28:08.113: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:28:09.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7085" for this suite. 
Jan 24 13:28:15.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:28:15.450: INFO: namespace custom-resource-definition-7085 deletion completed in 6.195795275s
• [SLOW TEST:7.401 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:28:15.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 24 13:28:15.521: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 24 13:28:15.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:17.674: INFO: stderr: ""
Jan 24 13:28:17.675: INFO: stdout: "service/redis-slave created\n"
Jan 24 13:28:17.675: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 24 13:28:17.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:18.033: INFO: stderr: ""
Jan 24 13:28:18.033: INFO: stdout: "service/redis-master created\n"
Jan 24 13:28:18.034: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 24 13:28:18.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:18.459: INFO: stderr: ""
Jan 24 13:28:18.459: INFO: stdout: "service/frontend created\n"
Jan 24 13:28:18.460: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 24 13:28:18.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:18.786: INFO: stderr: ""
Jan 24 13:28:18.787: INFO: stdout: "deployment.apps/frontend created\n"
Jan 24 13:28:18.787: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 24 13:28:18.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:19.183: INFO: stderr: ""
Jan 24 13:28:19.184: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 24 13:28:19.184: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 24 13:28:19.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3819'
Jan 24 13:28:19.609: INFO: stderr: ""
Jan 24 13:28:19.609: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 24 13:28:19.609: INFO: Waiting for all frontend pods to be Running.
Jan 24 13:28:44.662: INFO: Waiting for frontend to serve content.
Jan 24 13:28:44.753: INFO: Trying to add a new entry to the guestbook.
Jan 24 13:28:44.793: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Jan 24 13:28:44.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:45.010: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:45.010: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 24 13:28:45.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:45.199: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:45.199: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 24 13:28:45.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:45.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:45.357: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 24 13:28:45.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:45.531: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:45.531: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 24 13:28:45.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:45.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:45.656: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 24 13:28:45.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3819' Jan 24 13:28:46.043: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 13:28:46.043: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:28:46.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3819" for this suite. 
Jan 24 13:29:28.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:29:28.331: INFO: namespace kubectl-3819 deletion completed in 42.2556239s • [SLOW TEST:72.881 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:29:28.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 24 13:29:37.033: INFO: Successfully updated pod "pod-update-137ced29-0e65-4bc6-93ed-c23fbe4d1dd9" STEP: verifying the updated pod is in kubernetes Jan 24 13:29:37.074: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:29:37.074: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2703" for this suite. Jan 24 13:29:59.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:29:59.233: INFO: namespace pods-2703 deletion completed in 22.149746136s • [SLOW TEST:30.901 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:29:59.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d75c3592-b5f3-4cb1-94ea-a68a52ba96f1 STEP: Creating a pod to test consume configMaps Jan 24 13:29:59.360: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716" in namespace "projected-2244" to be "success or failure" Jan 24 13:29:59.418: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.839144ms
Jan 24 13:30:01.427: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06619395s
Jan 24 13:30:03.453: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092194065s
Jan 24 13:30:05.463: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102229278s
Jan 24 13:30:07.518: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157462468s
STEP: Saw pod success
Jan 24 13:30:07.518: INFO: Pod "pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716" satisfied condition "success or failure"
Jan 24 13:30:07.524: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716 container projected-configmap-volume-test:
STEP: delete the pod
Jan 24 13:30:07.612: INFO: Waiting for pod pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716 to disappear
Jan 24 13:30:07.640: INFO: Pod pod-projected-configmaps-c870f02e-c2db-4de0-a390-5e22a6f36716 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:30:07.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2244" for this suite.
Jan 24 13:30:13.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:30:13.834: INFO: namespace projected-2244 deletion completed in 6.184218001s
• [SLOW TEST:14.601 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:30:13.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9ed757b0-1023-4881-ba2e-e70580ec40a9
STEP: Creating secret with name secret-projected-all-test-volume-ab075728-5994-47a7-b74d-db673ff363a7
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 24 13:30:13.955: INFO: Waiting up to 5m0s for pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996" in namespace "projected-1832" to be "success or failure"
Jan 24 13:30:13.974: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996": Phase="Pending", Reason="", readiness=false. Elapsed: 18.082217ms
Jan 24 13:30:15.988: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032455987s
Jan 24 13:30:17.996: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040451348s
Jan 24 13:30:20.002: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046677827s
Jan 24 13:30:22.019: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062738543s
STEP: Saw pod success
Jan 24 13:30:22.019: INFO: Pod "projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996" satisfied condition "success or failure"
Jan 24 13:30:22.024: INFO: Trying to get logs from node iruya-node pod projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996 container projected-all-volume-test:
STEP: delete the pod
Jan 24 13:30:22.139: INFO: Waiting for pod projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996 to disappear
Jan 24 13:30:22.158: INFO: Pod projected-volume-0e8794e9-8645-4acc-b3e0-bb81ec2ce996 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:30:22.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1832" for this suite.
Jan 24 13:30:28.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:30:28.482: INFO: namespace projected-1832 deletion completed in 6.317853343s
• [SLOW TEST:14.648 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:30:28.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 24 13:30:28.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3523'
Jan 24 13:30:28.820: INFO: stderr: ""
Jan 24 13:30:28.820: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 24 13:30:29.830: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:29.830: INFO: Found 0 / 1
Jan 24 13:30:30.833: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:30.833: INFO: Found 0 / 1
Jan 24 13:30:31.833: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:31.833: INFO: Found 0 / 1
Jan 24 13:30:32.842: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:32.843: INFO: Found 0 / 1
Jan 24 13:30:33.839: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:33.839: INFO: Found 0 / 1
Jan 24 13:30:34.829: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:34.829: INFO: Found 0 / 1
Jan 24 13:30:35.830: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:35.830: INFO: Found 1 / 1
Jan 24 13:30:35.830: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 24 13:30:35.833: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:35.833: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 24 13:30:35.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tn4zl --namespace=kubectl-3523 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 24 13:30:35.993: INFO: stderr: ""
Jan 24 13:30:35.993: INFO: stdout: "pod/redis-master-tn4zl patched\n"
STEP: checking annotations
Jan 24 13:30:36.001: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:30:36.001: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:30:36.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3523" for this suite.
Jan 24 13:30:58.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:30:58.134: INFO: namespace kubectl-3523 deletion completed in 22.128689199s
• [SLOW TEST:29.650 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:30:58.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-ee691a83-46b0-4e29-897a-add51fc8c13b
STEP: Creating a pod to test consume secrets
Jan 24 13:30:58.239: INFO: Waiting up to 5m0s for pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb" in namespace "secrets-1820" to be "success or failure"
Jan 24 13:30:58.246: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265408ms
Jan 24 13:31:00.257: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01726361s
Jan 24 13:31:02.265: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026185147s
Jan 24 13:31:04.278: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038317612s
Jan 24 13:31:06.287: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047746761s
STEP: Saw pod success
Jan 24 13:31:06.287: INFO: Pod "pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb" satisfied condition "success or failure"
Jan 24 13:31:06.293: INFO: Trying to get logs from node iruya-node pod pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb container secret-volume-test:
STEP: delete the pod
Jan 24 13:31:06.416: INFO: Waiting for pod pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb to disappear
Jan 24 13:31:06.493: INFO: Pod pod-secrets-8dc122fe-ffb0-4a74-b063-a342c13f8ffb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:31:06.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1820" for this suite.
Jan 24 13:31:12.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:31:12.698: INFO: namespace secrets-1820 deletion completed in 6.19472815s
• [SLOW TEST:14.564 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:31:12.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 24 13:31:21.857: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:31:22.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5976" for this suite.
Jan 24 13:32:00.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:32:01.080: INFO: namespace replicaset-5976 deletion completed in 38.114989295s
• [SLOW TEST:48.380 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:32:01.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1960
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 24 13:32:01.171: INFO: Found 0 stateful pods, waiting for 3
Jan 24 13:32:11.181: INFO: Found 2 stateful pods, waiting for 3
Jan 24 13:32:21.182: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:32:21.182: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:32:21.182: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 13:32:31.181: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:32:31.181: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:32:31.181: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:32:31.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1960 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:32:31.659: INFO: stderr: "I0124 13:32:31.429981 1534 log.go:172] (0xc0007944d0) (0xc00087a8c0) Create stream\nI0124 13:32:31.430620 1534 log.go:172] (0xc0007944d0) (0xc00087a8c0) Stream added, broadcasting: 1\nI0124 13:32:31.439447 1534 log.go:172] (0xc0007944d0) Reply frame received for 1\nI0124 13:32:31.439489 1534 log.go:172] (0xc0007944d0) (0xc00087a000) Create stream\nI0124 13:32:31.439498 1534 log.go:172] (0xc0007944d0) (0xc00087a000) Stream added, broadcasting: 3\nI0124 13:32:31.440947 1534 log.go:172] (0xc0007944d0) Reply frame received for 3\nI0124 13:32:31.440979 1534 log.go:172] (0xc0007944d0) (0xc00064c140) Create stream\nI0124 13:32:31.440993 1534 log.go:172] (0xc0007944d0) (0xc00064c140) Stream added, broadcasting: 5\nI0124 13:32:31.442788 1534 log.go:172] (0xc0007944d0) Reply frame received for 5\nI0124 13:32:31.561641 1534 log.go:172] (0xc0007944d0) Data frame received for 5\nI0124 13:32:31.561702 1534 log.go:172] (0xc00064c140) (5) Data frame handling\nI0124 13:32:31.561719 1534 log.go:172] (0xc00064c140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:32:31.590665 1534 log.go:172] (0xc0007944d0) Data frame received for 3\nI0124 13:32:31.590721 1534 log.go:172] (0xc00087a000) (3) Data frame handling\nI0124 13:32:31.590737 1534 log.go:172] (0xc00087a000) (3) Data frame sent\nI0124 13:32:31.655259 1534 log.go:172] (0xc0007944d0) (0xc00087a000) Stream removed, broadcasting: 3\nI0124 13:32:31.655340 1534 log.go:172] (0xc0007944d0) Data frame received for 1\nI0124 13:32:31.655353 1534 log.go:172] (0xc00087a8c0) (1) Data frame handling\nI0124 13:32:31.655365 1534 log.go:172] (0xc00087a8c0) (1) Data frame sent\nI0124 13:32:31.655382 1534 log.go:172] (0xc0007944d0) (0xc00087a8c0) Stream removed, broadcasting: 1\nI0124 13:32:31.655596 1534 log.go:172] (0xc0007944d0) (0xc00064c140) Stream removed, broadcasting: 5\nI0124 13:32:31.655642 1534 log.go:172] (0xc0007944d0) Go away received\nI0124 13:32:31.655724 1534 log.go:172] (0xc0007944d0) (0xc00087a8c0) Stream removed, broadcasting: 1\nI0124 13:32:31.655735 1534 log.go:172] (0xc0007944d0) (0xc00087a000) Stream removed, broadcasting: 3\nI0124 13:32:31.655740 1534 log.go:172] (0xc0007944d0) (0xc00064c140) Stream removed, broadcasting: 5\n"
Jan 24 13:32:31.659: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:32:31.659: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 24 13:32:41.761: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 24 13:32:51.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1960 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:32:52.343: INFO: stderr: "I0124 13:32:52.113321 1552 log.go:172] (0xc0007c4420) (0xc0009ec780) Create stream\nI0124 13:32:52.113434 1552 log.go:172] (0xc0007c4420) (0xc0009ec780) Stream added, broadcasting: 1\nI0124 13:32:52.123142 1552 log.go:172] (0xc0007c4420) Reply frame received for 1\nI0124 13:32:52.123181 1552 log.go:172] (0xc0007c4420) (0xc0001d4000) Create stream\nI0124 13:32:52.123188 1552 log.go:172] (0xc0007c4420) (0xc0001d4000) Stream added, broadcasting: 3\nI0124 13:32:52.124261 1552 log.go:172] (0xc0007c4420) Reply frame received for 3\nI0124 13:32:52.124409 1552 log.go:172] (0xc0007c4420) (0xc0001d40a0) Create stream\nI0124 13:32:52.124440 1552 log.go:172] (0xc0007c4420) (0xc0001d40a0) Stream added, broadcasting: 5\nI0124 13:32:52.125381 1552 log.go:172] (0xc0007c4420) Reply frame received for 5\nI0124 13:32:52.233672 1552 log.go:172] (0xc0007c4420) Data frame received for 3\nI0124 13:32:52.233773 1552 log.go:172] (0xc0001d4000) (3) Data frame handling\nI0124 13:32:52.233783 1552 log.go:172] (0xc0001d4000) (3) Data frame sent\nI0124 13:32:52.233906 1552 log.go:172] (0xc0007c4420) Data frame received for 5\nI0124 13:32:52.233984 1552 log.go:172] (0xc0001d40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:32:52.234321 1552 log.go:172] (0xc0001d40a0) (5) Data frame sent\nI0124 13:32:52.334724 1552 log.go:172] (0xc0007c4420) (0xc0001d4000) Stream removed, broadcasting: 3\nI0124 13:32:52.335070 1552 log.go:172] (0xc0007c4420) Data frame received for 1\nI0124 13:32:52.335191 1552 log.go:172] (0xc0007c4420) (0xc0001d40a0) Stream removed, broadcasting: 5\nI0124 13:32:52.335270 1552 log.go:172] (0xc0009ec780) (1) Data frame handling\nI0124 13:32:52.335320 1552 log.go:172] (0xc0009ec780) (1) Data frame sent\nI0124 13:32:52.335368 1552 log.go:172] (0xc0007c4420) (0xc0009ec780) Stream removed, broadcasting: 1\nI0124 13:32:52.335449 1552 log.go:172] (0xc0007c4420) Go away received\nI0124 13:32:52.336174 1552 log.go:172] (0xc0007c4420) (0xc0009ec780) Stream removed, broadcasting: 1\nI0124 13:32:52.336192 1552 log.go:172] (0xc0007c4420) (0xc0001d4000) Stream removed, broadcasting: 3\nI0124 13:32:52.336203 1552 log.go:172] (0xc0007c4420) (0xc0001d40a0) Stream removed, broadcasting: 5\n"
Jan 24 13:32:52.343: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:32:52.343: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 24 13:33:02.380: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:33:02.380: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:02.380: INFO: Waiting for Pod statefulset-1960/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:02.380: INFO: Waiting for Pod statefulset-1960/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:12.390: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:33:12.390: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:12.390: INFO: Waiting for Pod statefulset-1960/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:22.401: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:33:22.401: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:22.401: INFO: Waiting for Pod statefulset-1960/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:32.723: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:33:32.723: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 13:33:42.396: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 24 13:33:52.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1960 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:33:52.860: INFO: stderr: "I0124 13:33:52.637784 1574 log.go:172] (0xc0008c8420) (0xc00099e640) Create stream\nI0124 13:33:52.637944 1574 log.go:172] (0xc0008c8420) (0xc00099e640) Stream added, broadcasting: 1\nI0124 13:33:52.642936 1574 log.go:172] (0xc0008c8420) Reply frame received for 1\nI0124 13:33:52.642964 1574 log.go:172] (0xc0008c8420) (0xc00074e000) Create stream\nI0124 13:33:52.642975 1574 log.go:172] (0xc0008c8420) (0xc00074e000) Stream added, broadcasting: 3\nI0124 13:33:52.644902 1574 log.go:172] (0xc0008c8420) Reply frame received for 3\nI0124 13:33:52.645003 1574 log.go:172] (0xc0008c8420) (0xc00010c280) Create stream\nI0124 13:33:52.645033 1574 log.go:172] (0xc0008c8420) (0xc00010c280) Stream added, broadcasting: 5\nI0124 13:33:52.647414 1574 log.go:172] (0xc0008c8420) Reply frame received for 5\nI0124 13:33:52.760221 1574 log.go:172] (0xc0008c8420) Data frame received for 5\nI0124 13:33:52.760333 1574 log.go:172] (0xc00010c280) (5) Data frame handling\nI0124 13:33:52.760367 1574 log.go:172] (0xc00010c280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:33:52.788016 1574 log.go:172] (0xc0008c8420) Data frame received for 3\nI0124 13:33:52.788036 1574 log.go:172] (0xc00074e000) (3) Data frame handling\nI0124 13:33:52.788047 1574 log.go:172] (0xc00074e000) (3) Data frame sent\nI0124 13:33:52.851446 1574 log.go:172] (0xc0008c8420) (0xc00074e000) Stream removed, broadcasting: 3\nI0124 13:33:52.851579 1574 log.go:172] (0xc0008c8420) Data frame received for 1\nI0124 13:33:52.851593 1574 log.go:172] (0xc00099e640) (1) Data frame handling\nI0124 13:33:52.851602 1574 log.go:172] (0xc00099e640) (1) Data frame sent\nI0124 13:33:52.851622 1574 log.go:172] (0xc0008c8420) (0xc00099e640) Stream removed, broadcasting: 1\nI0124 13:33:52.851704 1574 log.go:172] (0xc0008c8420) (0xc00010c280) Stream removed, broadcasting: 5\nI0124 13:33:52.851837 1574 log.go:172] (0xc0008c8420) Go away received\nI0124 13:33:52.852437 1574 log.go:172] (0xc0008c8420) (0xc00099e640) Stream removed, broadcasting: 1\nI0124 13:33:52.852493 1574 log.go:172] (0xc0008c8420) (0xc00074e000) Stream removed, broadcasting: 3\nI0124 13:33:52.852505 1574 log.go:172] (0xc0008c8420) (0xc00010c280) Stream removed, broadcasting: 5\n"
Jan 24 13:33:52.860: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:33:52.860: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 24 13:34:02.918: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 24 13:34:12.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1960 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:34:13.318: INFO: stderr: "I0124 13:34:13.158666 1594 log.go:172] (0xc0006a2a50) (0xc0005ca780) Create stream\nI0124 13:34:13.158914 1594 log.go:172] (0xc0006a2a50) (0xc0005ca780) Stream added, broadcasting: 1\nI0124 13:34:13.161648 1594 log.go:172] (0xc0006a2a50) Reply frame received for 1\nI0124 13:34:13.161680 1594 log.go:172] (0xc0006a2a50) (0xc0007de000) Create stream\nI0124 13:34:13.161699 1594 log.go:172] (0xc0006a2a50) (0xc0007de000) Stream added, broadcasting: 3\nI0124 13:34:13.163328 1594 log.go:172] (0xc0006a2a50) Reply frame received for 3\nI0124 13:34:13.163357 1594 log.go:172] (0xc0006a2a50) (0xc0006b6000) Create stream\nI0124 13:34:13.163367 1594 log.go:172] (0xc0006a2a50) (0xc0006b6000) Stream added, broadcasting: 5\nI0124 13:34:13.164418 1594 log.go:172] (0xc0006a2a50) Reply frame received for 5\nI0124 13:34:13.253051 1594 log.go:172] (0xc0006a2a50) Data frame received for 5\nI0124 13:34:13.253091 1594 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0124 13:34:13.253099 1594 log.go:172] (0xc0006b6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:34:13.253107 1594 log.go:172] (0xc0006a2a50) Data frame received for 3\nI0124 13:34:13.253111 1594 log.go:172] (0xc0007de000) (3) Data frame handling\nI0124 13:34:13.253116 1594 log.go:172] (0xc0007de000) (3) Data frame sent\nI0124 13:34:13.314335 1594 log.go:172] (0xc0006a2a50) Data frame received for 1\nI0124 13:34:13.314587 1594 log.go:172] (0xc0006a2a50) (0xc0006b6000) Stream removed, broadcasting: 5\nI0124 13:34:13.314635 1594 log.go:172] (0xc0005ca780) (1) Data frame handling\nI0124 13:34:13.314660 1594 log.go:172] (0xc0005ca780) (1) Data frame sent\nI0124 13:34:13.314671 1594 log.go:172] (0xc0006a2a50) (0xc0007de000) Stream removed, broadcasting: 3\nI0124 13:34:13.314697 1594 log.go:172] (0xc0006a2a50) (0xc0005ca780) Stream removed, broadcasting: 1\nI0124 13:34:13.314717 1594 log.go:172] (0xc0006a2a50) Go away received\nI0124 13:34:13.315201 1594 log.go:172] (0xc0006a2a50) (0xc0005ca780) Stream removed, broadcasting: 1\nI0124 13:34:13.315217 1594 log.go:172] (0xc0006a2a50) (0xc0007de000) Stream removed, broadcasting: 3\nI0124 13:34:13.315225 1594 log.go:172] (0xc0006a2a50) (0xc0006b6000) Stream removed, broadcasting: 5\n"
Jan 24 13:34:13.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:34:13.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 24 13:34:23.370: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:34:23.370: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 13:34:23.370: INFO: Waiting for Pod statefulset-1960/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 13:34:33.384: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:34:33.384: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 13:34:43.399: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:34:43.399: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 13:34:53.390: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
Jan 24 13:34:53.390: INFO: Waiting for Pod statefulset-1960/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 13:35:03.382: INFO: Waiting for StatefulSet statefulset-1960/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 24 13:35:13.384: INFO: Deleting all statefulset in ns statefulset-1960
Jan 24 13:35:13.389: INFO: Scaling statefulset ss2 to 0
Jan 24 13:35:33.421: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 13:35:33.426: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:35:33.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1960" for this suite.
Jan 24 13:35:41.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:35:41.664: INFO: namespace statefulset-1960 deletion completed in 8.212241065s
• [SLOW TEST:220.584 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:35:41.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:36:12.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5784" for this suite. Jan 24 13:36:18.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:36:18.168: INFO: namespace namespaces-5784 deletion completed in 6.123370053s STEP: Destroying namespace "nsdeletetest-5252" for this suite. Jan 24 13:36:18.171: INFO: Namespace nsdeletetest-5252 was already deleted STEP: Destroying namespace "nsdeletetest-7051" for this suite. Jan 24 13:36:24.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:36:24.297: INFO: namespace nsdeletetest-7051 deletion completed in 6.125518548s • [SLOW TEST:42.632 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:36:24.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:36:32.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-658" for this suite. Jan 24 13:37:18.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:37:18.698: INFO: namespace kubelet-test-658 deletion completed in 46.169031666s • [SLOW TEST:54.401 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:37:18.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:37:18.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1683' Jan 24 13:37:19.329: INFO: stderr: "" Jan 24 13:37:19.329: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 24 13:37:19.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1683' Jan 24 13:37:19.842: INFO: stderr: "" Jan 24 13:37:19.842: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 24 13:37:20.878: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:20.878: INFO: Found 0 / 1 Jan 24 13:37:21.870: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:21.871: INFO: Found 0 / 1 Jan 24 13:37:22.854: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:22.854: INFO: Found 0 / 1 Jan 24 13:37:23.854: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:23.854: INFO: Found 0 / 1 Jan 24 13:37:24.861: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:24.861: INFO: Found 0 / 1 Jan 24 13:37:25.852: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:25.852: INFO: Found 0 / 1 Jan 24 13:37:26.854: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:26.854: INFO: Found 0 / 1 Jan 24 13:37:27.857: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:27.857: INFO: Found 1 / 1 Jan 24 13:37:27.857: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 24 13:37:27.865: INFO: Selector matched 1 pods for map[app:redis] Jan 24 13:37:27.865: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
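The two `kubectl create -f -` invocations above pipe a ReplicationController and a Service manifest in on stdin. As a sketch only, a manifest along the following lines would produce equivalent objects; the names, labels, image, and port name are reconstructed from the `describe` output later in this run, and the actual test fixture may differ in details:

```yaml
# Reconstructed from the kubectl describe output below; not the literal test fixture.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server       # the Service's targetPort refers to this port name
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server       # matches "TargetPort: redis-server/TCP" in the describe output
```

The `describe service` output further down ("Selector: app=redis,role=master", "TargetPort: redis-server/TCP", "Endpoints: 10.44.0.1:6379") is consistent with a spec of this shape.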
Jan 24 13:37:27.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gwbf7 --namespace=kubectl-1683' Jan 24 13:37:28.047: INFO: stderr: "" Jan 24 13:37:28.047: INFO: stdout: "Name: redis-master-gwbf7\nNamespace: kubectl-1683\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Fri, 24 Jan 2020 13:37:19 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://d7a00b8e1b763dadc82cdfbfbe5b43d5888cd060e3e0b69a9572ee01e38137b8\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 24 Jan 2020 13:37:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-btzxl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-btzxl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-btzxl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-1683/redis-master-gwbf7 to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Jan 24 13:37:28.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-1683' Jan 24 13:37:28.257: INFO: stderr: "" Jan 24 13:37:28.257: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1683\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-gwbf7\n" Jan 24 13:37:28.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1683' Jan 24 13:37:28.366: INFO: stderr: "" Jan 24 13:37:28.366: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1683\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.252.110\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 24 13:37:28.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 24 13:37:28.483: INFO: stderr: "" Jan 24 13:37:28.483: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False 
Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Fri, 24 Jan 2020 13:36:52 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 24 Jan 2020 13:36:52 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 24 Jan 2020 13:36:52 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 24 Jan 2020 13:36:52 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 104d\n kubectl-1683 redis-master-gwbf7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 24 13:37:28.484: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1683' Jan 24 13:37:28.584: INFO: stderr: "" Jan 24 13:37:28.584: INFO: stdout: "Name: kubectl-1683\nLabels: e2e-framework=kubectl\n e2e-run=529ef535-b631-48c0-b4e4-6ffb97774c23\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:37:28.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1683" for this suite. Jan 24 13:37:50.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:37:50.770: INFO: namespace kubectl-1683 deletion completed in 22.182420008s • [SLOW TEST:32.072 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:37:50.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 24 13:37:50.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc" in namespace "downward-api-1789" to be "success or failure" Jan 24 13:37:50.917: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196308ms Jan 24 13:37:52.929: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020378056s Jan 24 13:37:54.940: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031144427s Jan 24 13:37:56.951: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041881079s Jan 24 13:37:58.963: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Running", Reason="", readiness=true. Elapsed: 8.054288604s Jan 24 13:38:00.980: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.071182988s STEP: Saw pod success Jan 24 13:38:00.980: INFO: Pod "downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc" satisfied condition "success or failure" Jan 24 13:38:00.987: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc container client-container: STEP: delete the pod Jan 24 13:38:01.075: INFO: Waiting for pod downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc to disappear Jan 24 13:38:01.080: INFO: Pod downwardapi-volume-9944af42-71e7-4e21-9217-4884137f15dc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:38:01.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1789" for this suite. Jan 24 13:38:07.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:38:07.220: INFO: namespace downward-api-1789 deletion completed in 6.135442648s • [SLOW TEST:16.450 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:38:07.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:38:15.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7637" for this suite. Jan 24 13:38:55.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:38:55.704: INFO: namespace kubelet-test-7637 deletion completed in 40.278476295s • [SLOW TEST:48.483 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:38:55.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-5b3f3437-69af-4672-bc3f-84e4389db0c6 STEP: Creating a pod to test consume secrets Jan 24 13:38:55.821: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd" in namespace "projected-1178" to be "success or failure" Jan 24 13:38:55.844: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.92648ms Jan 24 13:38:57.862: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040001487s Jan 24 13:38:59.876: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053887336s Jan 24 13:39:01.885: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063123803s Jan 24 13:39:03.897: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.075825695s STEP: Saw pod success Jan 24 13:39:03.898: INFO: Pod "pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd" satisfied condition "success or failure" Jan 24 13:39:03.902: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd container projected-secret-volume-test: STEP: delete the pod Jan 24 13:39:04.003: INFO: Waiting for pod pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd to disappear Jan 24 13:39:04.019: INFO: Pod pod-projected-secrets-e6208641-e91d-4d33-aeed-65bea4a61edd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:39:04.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1178" for this suite. Jan 24 13:39:10.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:39:10.284: INFO: namespace projected-1178 deletion completed in 6.251557716s • [SLOW TEST:14.579 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:39:10.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:39:10.363: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 24 13:39:10.374: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 24 13:39:15.384: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 24 13:39:17.396: INFO: Creating deployment "test-rolling-update-deployment" Jan 24 13:39:17.406: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 24 13:39:17.420: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 24 13:39:19.439: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 24 13:39:19.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:39:21.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:39:23.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715469957, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 13:39:25.453: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 24 13:39:25.475: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3117,SelfLink:/apis/apps/v1/namespaces/deployment-3117/deployments/test-rolling-update-deployment,UID:8618df8b-64ee-424e-ad56-969a31154b34,ResourceVersion:21686916,Generation:1,CreationTimestamp:2020-01-24 13:39:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-24 13:39:17 +0000 UTC 2020-01-24 13:39:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-24 13:39:24 +0000 UTC 2020-01-24 13:39:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 24 13:39:25.481: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3117,SelfLink:/apis/apps/v1/namespaces/deployment-3117/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:5537acd7-2187-4ff2-b7df-c3f3db228db9,ResourceVersion:21686905,Generation:1,CreationTimestamp:2020-01-24 13:39:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8618df8b-64ee-424e-ad56-969a31154b34 0xc0023afe37 0xc0023afe38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 24 13:39:25.481: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 24 13:39:25.482: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3117,SelfLink:/apis/apps/v1/namespaces/deployment-3117/replicasets/test-rolling-update-controller,UID:099e7801-7242-44b4-a26e-d3047308f4be,ResourceVersion:21686915,Generation:2,CreationTimestamp:2020-01-24 13:39:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8618df8b-64ee-424e-ad56-969a31154b34 0xc0023afd4f 0xc0023afd60}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 24 13:39:25.488: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-lw47r" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-lw47r,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3117,SelfLink:/api/v1/namespaces/deployment-3117/pods/test-rolling-update-deployment-79f6b9d75c-lw47r,UID:cc7cfb06-4228-478d-8ad1-dd9b126202b4,ResourceVersion:21686904,Generation:0,CreationTimestamp:2020-01-24 13:39:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 5537acd7-2187-4ff2-b7df-c3f3db228db9 0xc002a62827 0xc002a62828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-br2dq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-br2dq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-br2dq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a628d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a628f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:39:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:39:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:39:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:39:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-24 13:39:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-24 13:39:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a1bfaad541d1a1668e38e837dc264bcb30ce21a0aa1252e037102dbac6db8823}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:39:25.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-3117" for this suite. Jan 24 13:39:31.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:39:31.652: INFO: namespace deployment-3117 deletion completed in 6.157412427s • [SLOW TEST:21.367 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:39:31.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0124 13:39:41.928659 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 24 13:39:41.928: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:39:41.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8891" for this suite.
Jan 24 13:39:47.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:39:48.211: INFO: namespace gc-8891 deletion completed in 6.278291522s
• [SLOW TEST:16.559 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:39:48.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 13:39:48.270: INFO: Creating ReplicaSet my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc
Jan 24 13:39:48.282: INFO: Pod name my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc: Found 0 pods out of 1
Jan 24 13:39:53.289: INFO: Pod name my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc: Found 1 pods out of 1
Jan 24 13:39:53.289: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc" is running
Jan 24 13:39:55.299: INFO: Pod "my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc-t74px" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:39:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:39:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:39:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:39:48 +0000 UTC Reason: Message:}])
Jan 24 13:39:55.300: INFO: Trying to dial the pod
Jan 24 13:40:00.378: INFO: Controller my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc: Got expected result from replica 1 [my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc-t74px]: "my-hostname-basic-dd522583-ffb7-41be-8a32-0380448e1bdc-t74px", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:40:00.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7130" for this suite.
Jan 24 13:40:06.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:06.535: INFO: namespace replicaset-7130 deletion completed in 6.145301514s
• [SLOW TEST:18.324 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:40:06.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-da3589a9-0ca0-4495-be5d-ad2f6153e252
STEP: Creating a pod to test consume secrets
Jan 24 13:40:06.763: INFO: Waiting up to 5m0s for pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1" in namespace "secrets-8282" to be "success or failure"
Jan 24 13:40:06.780: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.252914ms
Jan 24 13:40:08.787: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023849047s
Jan 24 13:40:10.798: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034684325s
Jan 24 13:40:12.810: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047275806s
Jan 24 13:40:14.817: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05407434s
STEP: Saw pod success
Jan 24 13:40:14.817: INFO: Pod "pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1" satisfied condition "success or failure"
Jan 24 13:40:14.821: INFO: Trying to get logs from node iruya-node pod pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1 container secret-volume-test:
STEP: delete the pod
Jan 24 13:40:14.895: INFO: Waiting for pod pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1 to disappear
Jan 24 13:40:15.589: INFO: Pod pod-secrets-3d5fe2b9-ac12-43eb-882d-301628f636a1 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:40:15.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8282" for this suite.
Jan 24 13:40:21.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:22.073: INFO: namespace secrets-8282 deletion completed in 6.467832934s
STEP: Destroying namespace "secret-namespace-7841" for this suite.
Jan 24 13:40:28.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:28.216: INFO: namespace secret-namespace-7841 deletion completed in 6.143400573s
• [SLOW TEST:21.681 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:40:28.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 24 13:40:28.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8405,SelfLink:/api/v1/namespaces/watch-8405/configmaps/e2e-watch-test-watch-closed,UID:39de1933-c83a-4b54-8c2f-6c2bca41d9ba,ResourceVersion:21687131,Generation:0,CreationTimestamp:2020-01-24 13:40:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 13:40:28.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8405,SelfLink:/api/v1/namespaces/watch-8405/configmaps/e2e-watch-test-watch-closed,UID:39de1933-c83a-4b54-8c2f-6c2bca41d9ba,ResourceVersion:21687132,Generation:0,CreationTimestamp:2020-01-24 13:40:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 24 13:40:28.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8405,SelfLink:/api/v1/namespaces/watch-8405/configmaps/e2e-watch-test-watch-closed,UID:39de1933-c83a-4b54-8c2f-6c2bca41d9ba,ResourceVersion:21687133,Generation:0,CreationTimestamp:2020-01-24 13:40:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 13:40:28.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8405,SelfLink:/api/v1/namespaces/watch-8405/configmaps/e2e-watch-test-watch-closed,UID:39de1933-c83a-4b54-8c2f-6c2bca41d9ba,ResourceVersion:21687134,Generation:0,CreationTimestamp:2020-01-24 13:40:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:40:28.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8405" for this suite.
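The watch test above opens a second watch at the last resourceVersion the closed watch observed (21687132) and still receives the later MODIFIED (21687133) and DELETED (21687134) notifications. The resume semantics can be sketched with a toy in-memory event log; this is an illustration of the behavior only, not client-go or the real apiserver machinery, and `EventLog`/`WatchFrom` are hypothetical names:

```go
package main

import "fmt"

// Event mimics a watch notification: a change type plus the
// monotonically increasing resourceVersion assigned to the change.
type Event struct {
	Type            string // ADDED, MODIFIED, DELETED
	ResourceVersion int
}

// EventLog is a toy stand-in for the apiserver's change history.
type EventLog struct{ events []Event }

func (l *EventLog) Record(t string, rv int) {
	l.events = append(l.events, Event{t, rv})
}

// WatchFrom models restarting a watch at a known resourceVersion:
// every event strictly newer than rv is replayed to the new watcher,
// so nothing that happened while the first watch was closed is lost.
func (l *EventLog) WatchFrom(rv int) []Event {
	var out []Event
	for _, e := range l.events {
		if e.ResourceVersion > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := &EventLog{}
	log.Record("ADDED", 21687131)
	log.Record("MODIFIED", 21687132) // first watch closes after this event
	log.Record("MODIFIED", 21687133) // happens while the watch is closed
	log.Record("DELETED", 21687134)

	// Restart watching from the last version the first watch saw.
	for _, e := range log.WatchFrom(21687132) {
		fmt.Println(e.Type, e.ResourceVersion)
	}
	// Prints:
	// MODIFIED 21687133
	// DELETED 21687134
}
```

The real apiserver only guarantees this replay for versions still in its watch cache window; a too-old resourceVersion yields a 410 Gone, which this sketch does not model.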
Jan 24 13:40:34.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:34.679: INFO: namespace watch-8405 deletion completed in 6.238447368s
• [SLOW TEST:6.463 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:40:34.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:40:34.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b" in namespace "projected-7326" to be "success or failure"
Jan 24 13:40:34.899: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 113.7508ms
Jan 24 13:40:36.914: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128253075s
Jan 24 13:40:38.922: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136301898s
Jan 24 13:40:40.934: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148703952s
Jan 24 13:40:42.945: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159482927s
STEP: Saw pod success
Jan 24 13:40:42.945: INFO: Pod "downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b" satisfied condition "success or failure"
Jan 24 13:40:42.949: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b container client-container:
STEP: delete the pod
Jan 24 13:40:43.135: INFO: Waiting for pod downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b to disappear
Jan 24 13:40:43.147: INFO: Pod downwardapi-volume-186fe835-4957-4caa-8a4d-045c77289a4b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:40:43.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7326" for this suite.
Jan 24 13:40:49.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:49.334: INFO: namespace projected-7326 deletion completed in 6.182188838s
• [SLOW TEST:14.654 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:40:49.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:40:49.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-119" for this suite.
Jan 24 13:40:55.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:40:55.843: INFO: namespace services-119 deletion completed in 6.360010676s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.509 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:40:55.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:40:55.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302" in namespace "downward-api-3950" to be "success or failure"
Jan 24 13:40:55.947: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697549ms
Jan 24 13:40:57.955: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016250055s
Jan 24 13:40:59.963: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024740613s
Jan 24 13:41:01.973: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034183166s
Jan 24 13:41:03.988: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048957078s
STEP: Saw pod success
Jan 24 13:41:03.988: INFO: Pod "downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302" satisfied condition "success or failure"
Jan 24 13:41:03.993: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302 container client-container:
STEP: delete the pod
Jan 24 13:41:04.095: INFO: Waiting for pod downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302 to disappear
Jan 24 13:41:04.103: INFO: Pod downwardapi-volume-19b87889-c252-44a6-8819-d556d11ad302 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:41:04.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3950" for this suite.
Jan 24 13:41:10.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:41:10.286: INFO: namespace downward-api-3950 deletion completed in 6.168198813s
• [SLOW TEST:14.443 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:41:10.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:41:10.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3007" for this suite.
Jan 24 13:41:16.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:41:16.723: INFO: namespace kubelet-test-3007 deletion completed in 6.274556452s
• [SLOW TEST:6.436 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:41:16.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:41:16.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76" in namespace "downward-api-9033" to be "success or failure"
Jan 24 13:41:16.818: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76": Phase="Pending", Reason="", readiness=false. Elapsed: 9.825887ms
Jan 24 13:41:18.828: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019459107s
Jan 24 13:41:20.834: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025477295s
Jan 24 13:41:22.871: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062343638s
Jan 24 13:41:24.897: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08885211s
STEP: Saw pod success
Jan 24 13:41:24.897: INFO: Pod "downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76" satisfied condition "success or failure"
Jan 24 13:41:24.901: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76 container client-container:
STEP: delete the pod
Jan 24 13:41:24.978: INFO: Waiting for pod downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76 to disappear
Jan 24 13:41:24.986: INFO: Pod downwardapi-volume-a5c89f34-2272-446d-8506-f3a6eb854e76 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:41:24.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9033" for this suite.
Jan 24 13:41:31.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:41:31.384: INFO: namespace downward-api-9033 deletion completed in 6.393316417s
• [SLOW TEST:14.661 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:41:31.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5605
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 13:41:31.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 13:42:03.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5605 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 13:42:03.787: INFO: >>> kubeConfig: /root/.kube/config
I0124 13:42:03.911706 9 log.go:172] (0xc0019ca210) (0xc001b025a0) Create stream
I0124 13:42:03.911833 9 log.go:172] (0xc0019ca210) (0xc001b025a0) Stream added, broadcasting: 1
I0124 13:42:03.927013 9 log.go:172] (0xc0019ca210) Reply frame received for 1
I0124 13:42:03.927142 9 log.go:172] (0xc0019ca210) (0xc001110d20) Create stream
I0124 13:42:03.927162 9 log.go:172] (0xc0019ca210) (0xc001110d20) Stream added, broadcasting: 3
I0124 13:42:03.930666 9 log.go:172] (0xc0019ca210) Reply frame received for 3
I0124 13:42:03.930705 9 log.go:172] (0xc0019ca210) (0xc001b02640) Create stream
I0124 13:42:03.930724 9 log.go:172] (0xc0019ca210) (0xc001b02640) Stream added, broadcasting: 5
I0124 13:42:03.934458 9 log.go:172] (0xc0019ca210) Reply frame received for 5
I0124 13:42:04.163665 9 log.go:172] (0xc0019ca210) Data frame received for 3
I0124 13:42:04.163767 9 log.go:172] (0xc001110d20) (3) Data frame handling
I0124 13:42:04.163815 9 log.go:172] (0xc001110d20) (3) Data frame sent
I0124 13:42:04.366655 9 log.go:172] (0xc0019ca210) (0xc001110d20) Stream removed, broadcasting: 3
I0124 13:42:04.366807 9 log.go:172] (0xc0019ca210) Data frame received for 1
I0124 13:42:04.366822 9 log.go:172] (0xc001b025a0) (1) Data frame handling
I0124 13:42:04.366837 9 log.go:172] (0xc0019ca210) (0xc001b02640) Stream removed, broadcasting: 5
I0124 13:42:04.366869 9 log.go:172] (0xc001b025a0) (1) Data frame sent
I0124 13:42:04.366886 9 log.go:172] (0xc0019ca210) (0xc001b025a0) Stream removed, broadcasting: 1
I0124 13:42:04.366908 9 log.go:172] (0xc0019ca210) Go away received
I0124 13:42:04.367142 9 log.go:172] (0xc0019ca210) (0xc001b025a0) Stream removed, broadcasting: 1
I0124 13:42:04.367186 9 log.go:172] (0xc0019ca210) (0xc001110d20) Stream removed, broadcasting: 3
I0124 13:42:04.367209 9 log.go:172] (0xc0019ca210) (0xc001b02640) Stream removed, broadcasting: 5
Jan 24 13:42:04.367: INFO: Waiting for endpoints: map[]
Jan 24 13:42:04.375: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5605 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 13:42:04.375: INFO: >>> kubeConfig: /root/.kube/config
I0124 13:42:04.457042 9 log.go:172] (0xc0019cb130) (0xc001b02c80) Create stream
I0124 13:42:04.457145 9 log.go:172] (0xc0019cb130) (0xc001b02c80) Stream added, broadcasting: 1
I0124 13:42:04.485378 9 log.go:172] (0xc0019cb130) Reply frame received for 1
I0124 13:42:04.485480 9 log.go:172] (0xc0019cb130) (0xc001b02dc0) Create stream
I0124 13:42:04.485488 9 log.go:172] (0xc0019cb130) (0xc001b02dc0) Stream added, broadcasting: 3
I0124 13:42:04.497540 9 log.go:172] (0xc0019cb130) Reply frame received for 3
I0124 13:42:04.497563 9 log.go:172] (0xc0019cb130) (0xc001110dc0) Create stream
I0124 13:42:04.497573 9 log.go:172] (0xc0019cb130) (0xc001110dc0) Stream added, broadcasting: 5
I0124 13:42:04.500428 9 log.go:172] (0xc0019cb130) Reply frame received for 5
I0124 13:42:04.853916 9 log.go:172] (0xc0019cb130) Data frame received for 3
I0124 13:42:04.854077 9 log.go:172] (0xc001b02dc0) (3) Data frame handling
I0124 13:42:04.854120 9 log.go:172] (0xc001b02dc0) (3) Data frame sent
I0124 13:42:05.051036 9 log.go:172] (0xc0019cb130) (0xc001b02dc0) Stream removed, broadcasting: 3
I0124 13:42:05.051162 9 log.go:172] (0xc0019cb130) Data frame received for 1
I0124 13:42:05.051192 9 log.go:172] (0xc0019cb130) (0xc001110dc0) Stream removed, broadcasting: 5
I0124 13:42:05.051233 9 log.go:172] (0xc001b02c80) (1) Data frame handling
I0124 13:42:05.051253 9 log.go:172] (0xc001b02c80) (1) Data frame sent
I0124 13:42:05.051259 9 log.go:172] (0xc0019cb130)
(0xc001b02c80) Stream removed, broadcasting: 1 I0124 13:42:05.051289 9 log.go:172] (0xc0019cb130) Go away received I0124 13:42:05.051430 9 log.go:172] (0xc0019cb130) (0xc001b02c80) Stream removed, broadcasting: 1 I0124 13:42:05.051443 9 log.go:172] (0xc0019cb130) (0xc001b02dc0) Stream removed, broadcasting: 3 I0124 13:42:05.051446 9 log.go:172] (0xc0019cb130) (0xc001110dc0) Stream removed, broadcasting: 5 Jan 24 13:42:05.051: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:42:05.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5605" for this suite. Jan 24 13:42:29.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:42:29.251: INFO: namespace pod-network-test-5605 deletion completed in 24.192008825s • [SLOW TEST:57.866 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:42:29.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4798 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-4798 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4798 Jan 24 13:42:29.390: INFO: Found 0 stateful pods, waiting for 1 Jan 24 13:42:39.401: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 24 13:42:39.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:42:41.827: INFO: stderr: "I0124 13:42:41.482687 1760 log.go:172] (0xc00013a840) (0xc00069a780) Create stream\nI0124 13:42:41.482798 1760 log.go:172] (0xc00013a840) (0xc00069a780) Stream added, broadcasting: 1\nI0124 13:42:41.493198 1760 log.go:172] (0xc00013a840) Reply frame received for 1\nI0124 13:42:41.493249 1760 log.go:172] (0xc00013a840) (0xc00079c0a0) Create stream\nI0124 13:42:41.493266 1760 log.go:172] (0xc00013a840) (0xc00079c0a0) Stream added, broadcasting: 3\nI0124 13:42:41.496164 1760 log.go:172] (0xc00013a840) Reply frame received for 3\nI0124 13:42:41.496219 1760 log.go:172] (0xc00013a840) (0xc0006b0000) Create stream\nI0124 13:42:41.496247 1760 log.go:172] (0xc00013a840) (0xc0006b0000) Stream added, 
broadcasting: 5\nI0124 13:42:41.498843 1760 log.go:172] (0xc00013a840) Reply frame received for 5\nI0124 13:42:41.629829 1760 log.go:172] (0xc00013a840) Data frame received for 5\nI0124 13:42:41.629869 1760 log.go:172] (0xc0006b0000) (5) Data frame handling\nI0124 13:42:41.629888 1760 log.go:172] (0xc0006b0000) (5) Data frame sent\nI0124 13:42:41.629896 1760 log.go:172] (0xc00013a840) Data frame received for 5\n+ mv -vI0124 13:42:41.629902 1760 log.go:172] (0xc0006b0000) (5) Data frame handling\nI0124 13:42:41.629952 1760 log.go:172] (0xc0006b0000) (5) Data frame sent\n /usr/share/nginx/html/index.html /tmp/\nI0124 13:42:41.666233 1760 log.go:172] (0xc00013a840) Data frame received for 3\nI0124 13:42:41.666258 1760 log.go:172] (0xc00079c0a0) (3) Data frame handling\nI0124 13:42:41.666271 1760 log.go:172] (0xc00079c0a0) (3) Data frame sent\nI0124 13:42:41.814439 1760 log.go:172] (0xc00013a840) (0xc00079c0a0) Stream removed, broadcasting: 3\nI0124 13:42:41.814610 1760 log.go:172] (0xc00013a840) Data frame received for 1\nI0124 13:42:41.814654 1760 log.go:172] (0xc00069a780) (1) Data frame handling\nI0124 13:42:41.814682 1760 log.go:172] (0xc00069a780) (1) Data frame sent\nI0124 13:42:41.814752 1760 log.go:172] (0xc00013a840) (0xc00069a780) Stream removed, broadcasting: 1\nI0124 13:42:41.815466 1760 log.go:172] (0xc00013a840) (0xc0006b0000) Stream removed, broadcasting: 5\nI0124 13:42:41.815502 1760 log.go:172] (0xc00013a840) Go away received\nI0124 13:42:41.815707 1760 log.go:172] (0xc00013a840) (0xc00069a780) Stream removed, broadcasting: 1\nI0124 13:42:41.815741 1760 log.go:172] (0xc00013a840) (0xc00079c0a0) Stream removed, broadcasting: 3\nI0124 13:42:41.815754 1760 log.go:172] (0xc00013a840) (0xc0006b0000) Stream removed, broadcasting: 5\n" Jan 24 13:42:41.827: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:42:41.827: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:42:41.843: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 24 13:42:51.857: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:42:51.858: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:42:51.911: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:42:51.911: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:42:51.911: INFO: ss-1 Pending [] Jan 24 13:42:51.911: INFO: Jan 24 13:42:51.911: INFO: StatefulSet ss has not reached scale 3, at 2 Jan 24 13:42:53.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981872772s Jan 24 13:42:54.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.391947959s Jan 24 13:42:55.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.027059439s Jan 24 13:42:56.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.009231855s Jan 24 13:42:58.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97578873s Jan 24 13:42:59.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.425541112s Jan 24 13:43:01.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.090614065s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4798 Jan 24 13:43:02.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:02.775: INFO: stderr: "I0124 13:43:02.399249 1794 log.go:172] (0xc000116dc0) (0xc000208820) Create stream\nI0124 13:43:02.399372 1794 log.go:172] (0xc000116dc0) (0xc000208820) Stream added, broadcasting: 1\nI0124 13:43:02.404411 1794 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0124 13:43:02.404433 1794 log.go:172] (0xc000116dc0) (0xc0008f4000) Create stream\nI0124 13:43:02.404451 1794 log.go:172] (0xc000116dc0) (0xc0008f4000) Stream added, broadcasting: 3\nI0124 13:43:02.405882 1794 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0124 13:43:02.405901 1794 log.go:172] (0xc000116dc0) (0xc000798320) Create stream\nI0124 13:43:02.405912 1794 log.go:172] (0xc000116dc0) (0xc000798320) Stream added, broadcasting: 5\nI0124 13:43:02.407343 1794 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0124 13:43:02.546027 1794 log.go:172] (0xc000116dc0) Data frame received for 3\nI0124 13:43:02.546253 1794 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0124 13:43:02.546289 1794 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0124 13:43:02.546349 1794 log.go:172] (0xc000116dc0) Data frame received for 5\nI0124 13:43:02.546371 1794 log.go:172] (0xc000798320) (5) Data frame handling\nI0124 13:43:02.546397 1794 log.go:172] (0xc000798320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0124 13:43:02.765802 1794 log.go:172] (0xc000116dc0) Data frame received for 1\nI0124 13:43:02.765993 1794 log.go:172] (0xc000116dc0) (0xc0008f4000) Stream removed, broadcasting: 3\nI0124 13:43:02.766054 1794 log.go:172] (0xc000208820) (1) Data frame handling\nI0124 13:43:02.766073 1794 log.go:172] (0xc000208820) (1) Data frame sent\nI0124 13:43:02.766344 1794 log.go:172] (0xc000116dc0) (0xc000798320) Stream removed, broadcasting: 5\nI0124 13:43:02.766394 1794 log.go:172] (0xc000116dc0) (0xc000208820) Stream removed, broadcasting: 1\nI0124 13:43:02.766419 1794 log.go:172] 
(0xc000116dc0) Go away received\nI0124 13:43:02.767157 1794 log.go:172] (0xc000116dc0) (0xc000208820) Stream removed, broadcasting: 1\nI0124 13:43:02.767173 1794 log.go:172] (0xc000116dc0) (0xc0008f4000) Stream removed, broadcasting: 3\nI0124 13:43:02.767180 1794 log.go:172] (0xc000116dc0) (0xc000798320) Stream removed, broadcasting: 5\n" Jan 24 13:43:02.775: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:43:02.775: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:43:02.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:03.084: INFO: stderr: "I0124 13:43:02.907115 1815 log.go:172] (0xc0007d2160) (0xc00079a0a0) Create stream\nI0124 13:43:02.907224 1815 log.go:172] (0xc0007d2160) (0xc00079a0a0) Stream added, broadcasting: 1\nI0124 13:43:02.910382 1815 log.go:172] (0xc0007d2160) Reply frame received for 1\nI0124 13:43:02.910422 1815 log.go:172] (0xc0007d2160) (0xc00079a140) Create stream\nI0124 13:43:02.910429 1815 log.go:172] (0xc0007d2160) (0xc00079a140) Stream added, broadcasting: 3\nI0124 13:43:02.911806 1815 log.go:172] (0xc0007d2160) Reply frame received for 3\nI0124 13:43:02.911828 1815 log.go:172] (0xc0007d2160) (0xc000868000) Create stream\nI0124 13:43:02.911837 1815 log.go:172] (0xc0007d2160) (0xc000868000) Stream added, broadcasting: 5\nI0124 13:43:02.912620 1815 log.go:172] (0xc0007d2160) Reply frame received for 5\nI0124 13:43:02.999803 1815 log.go:172] (0xc0007d2160) Data frame received for 3\nI0124 13:43:02.999859 1815 log.go:172] (0xc00079a140) (3) Data frame handling\nI0124 13:43:02.999874 1815 log.go:172] (0xc00079a140) (3) Data frame sent\nI0124 13:43:02.999911 1815 log.go:172] (0xc0007d2160) Data frame received for 5\nI0124 13:43:02.999917 1815 log.go:172] (0xc000868000) (5) 
Data frame handling\nI0124 13:43:02.999926 1815 log.go:172] (0xc000868000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0124 13:43:03.081329 1815 log.go:172] (0xc0007d2160) Data frame received for 1\nI0124 13:43:03.081412 1815 log.go:172] (0xc0007d2160) (0xc00079a140) Stream removed, broadcasting: 3\nI0124 13:43:03.081426 1815 log.go:172] (0xc00079a0a0) (1) Data frame handling\nI0124 13:43:03.081432 1815 log.go:172] (0xc00079a0a0) (1) Data frame sent\nI0124 13:43:03.081437 1815 log.go:172] (0xc0007d2160) (0xc00079a0a0) Stream removed, broadcasting: 1\nI0124 13:43:03.081703 1815 log.go:172] (0xc0007d2160) (0xc000868000) Stream removed, broadcasting: 5\nI0124 13:43:03.081719 1815 log.go:172] (0xc0007d2160) (0xc00079a0a0) Stream removed, broadcasting: 1\nI0124 13:43:03.081725 1815 log.go:172] (0xc0007d2160) (0xc00079a140) Stream removed, broadcasting: 3\nI0124 13:43:03.081733 1815 log.go:172] (0xc0007d2160) (0xc000868000) Stream removed, broadcasting: 5\nI0124 13:43:03.081766 1815 log.go:172] (0xc0007d2160) Go away received\n" Jan 24 13:43:03.084: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 24 13:43:03.085: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:43:03.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:03.628: INFO: stderr: "I0124 13:43:03.303052 1829 log.go:172] (0xc0008009a0) (0xc0001ee780) Create stream\nI0124 13:43:03.303837 1829 log.go:172] (0xc0008009a0) (0xc0001ee780) Stream added, broadcasting: 1\nI0124 13:43:03.313439 1829 log.go:172] (0xc0008009a0) Reply frame received for 1\nI0124 13:43:03.313514 1829 log.go:172] (0xc0008009a0) (0xc0008220a0) Create stream\nI0124 13:43:03.313543 1829 
log.go:172] (0xc0008009a0) (0xc0008220a0) Stream added, broadcasting: 3\nI0124 13:43:03.314969 1829 log.go:172] (0xc0008009a0) Reply frame received for 3\nI0124 13:43:03.315040 1829 log.go:172] (0xc0008009a0) (0xc0005c0000) Create stream\nI0124 13:43:03.315058 1829 log.go:172] (0xc0008009a0) (0xc0005c0000) Stream added, broadcasting: 5\nI0124 13:43:03.316708 1829 log.go:172] (0xc0008009a0) Reply frame received for 5\nI0124 13:43:03.440544 1829 log.go:172] (0xc0008009a0) Data frame received for 5\nI0124 13:43:03.440670 1829 log.go:172] (0xc0005c0000) (5) Data frame handling\nI0124 13:43:03.440728 1829 log.go:172] (0xc0005c0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0124 13:43:03.440756 1829 log.go:172] (0xc0008009a0) Data frame received for 3\nI0124 13:43:03.440847 1829 log.go:172] (0xc0008220a0) (3) Data frame handling\nI0124 13:43:03.440880 1829 log.go:172] (0xc0008220a0) (3) Data frame sent\nI0124 13:43:03.613390 1829 log.go:172] (0xc0008009a0) (0xc0008220a0) Stream removed, broadcasting: 3\nI0124 13:43:03.613645 1829 log.go:172] (0xc0008009a0) Data frame received for 1\nI0124 13:43:03.613680 1829 log.go:172] (0xc0001ee780) (1) Data frame handling\nI0124 13:43:03.613744 1829 log.go:172] (0xc0001ee780) (1) Data frame sent\nI0124 13:43:03.613902 1829 log.go:172] (0xc0008009a0) (0xc0001ee780) Stream removed, broadcasting: 1\nI0124 13:43:03.614850 1829 log.go:172] (0xc0008009a0) (0xc0005c0000) Stream removed, broadcasting: 5\nI0124 13:43:03.614902 1829 log.go:172] (0xc0008009a0) Go away received\nI0124 13:43:03.614983 1829 log.go:172] (0xc0008009a0) (0xc0001ee780) Stream removed, broadcasting: 1\nI0124 13:43:03.614999 1829 log.go:172] (0xc0008009a0) (0xc0008220a0) Stream removed, broadcasting: 3\nI0124 13:43:03.615016 1829 log.go:172] (0xc0008009a0) (0xc0005c0000) Stream removed, broadcasting: 5\n" Jan 24 13:43:03.629: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Jan 24 13:43:03.629: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 24 13:43:03.644: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:43:03.644: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false Jan 24 13:43:13.659: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:43:13.659: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 13:43:13.659: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 24 13:43:13.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:43:14.431: INFO: stderr: "I0124 13:43:13.931159 1850 log.go:172] (0xc00079c420) (0xc0005aaa00) Create stream\nI0124 13:43:13.931299 1850 log.go:172] (0xc00079c420) (0xc0005aaa00) Stream added, broadcasting: 1\nI0124 13:43:13.941086 1850 log.go:172] (0xc00079c420) Reply frame received for 1\nI0124 13:43:13.941192 1850 log.go:172] (0xc00079c420) (0xc00075c0a0) Create stream\nI0124 13:43:13.941223 1850 log.go:172] (0xc00079c420) (0xc00075c0a0) Stream added, broadcasting: 3\nI0124 13:43:13.945314 1850 log.go:172] (0xc00079c420) Reply frame received for 3\nI0124 13:43:13.945375 1850 log.go:172] (0xc00079c420) (0xc0006bc1e0) Create stream\nI0124 13:43:13.945427 1850 log.go:172] (0xc00079c420) (0xc0006bc1e0) Stream added, broadcasting: 5\nI0124 13:43:13.950504 1850 log.go:172] (0xc00079c420) Reply frame received for 5\nI0124 13:43:14.275072 1850 log.go:172] (0xc00079c420) Data frame received for 3\nI0124 13:43:14.275131 1850 log.go:172] (0xc00075c0a0) (3) Data frame 
handling\nI0124 13:43:14.275150 1850 log.go:172] (0xc00075c0a0) (3) Data frame sent\nI0124 13:43:14.275188 1850 log.go:172] (0xc00079c420) Data frame received for 5\nI0124 13:43:14.275212 1850 log.go:172] (0xc0006bc1e0) (5) Data frame handling\nI0124 13:43:14.275254 1850 log.go:172] (0xc0006bc1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:43:14.422784 1850 log.go:172] (0xc00079c420) Data frame received for 1\nI0124 13:43:14.422874 1850 log.go:172] (0xc0005aaa00) (1) Data frame handling\nI0124 13:43:14.422959 1850 log.go:172] (0xc0005aaa00) (1) Data frame sent\nI0124 13:43:14.422986 1850 log.go:172] (0xc00079c420) (0xc0005aaa00) Stream removed, broadcasting: 1\nI0124 13:43:14.423276 1850 log.go:172] (0xc00079c420) (0xc00075c0a0) Stream removed, broadcasting: 3\nI0124 13:43:14.423347 1850 log.go:172] (0xc00079c420) (0xc0006bc1e0) Stream removed, broadcasting: 5\nI0124 13:43:14.423549 1850 log.go:172] (0xc00079c420) Go away received\nI0124 13:43:14.423817 1850 log.go:172] (0xc00079c420) (0xc0005aaa00) Stream removed, broadcasting: 1\nI0124 13:43:14.423855 1850 log.go:172] (0xc00079c420) (0xc00075c0a0) Stream removed, broadcasting: 3\nI0124 13:43:14.423869 1850 log.go:172] (0xc00079c420) (0xc0006bc1e0) Stream removed, broadcasting: 5\n" Jan 24 13:43:14.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:43:14.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:43:14.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:43:14.840: INFO: stderr: "I0124 13:43:14.624747 1870 log.go:172] (0xc0009362c0) (0xc0005cc8c0) Create stream\nI0124 13:43:14.625052 1870 log.go:172] (0xc0009362c0) (0xc0005cc8c0) Stream added, broadcasting: 1\nI0124 13:43:14.636347 1870 log.go:172] 
(0xc0009362c0) Reply frame received for 1\nI0124 13:43:14.636387 1870 log.go:172] (0xc0009362c0) (0xc00034e000) Create stream\nI0124 13:43:14.636394 1870 log.go:172] (0xc0009362c0) (0xc00034e000) Stream added, broadcasting: 3\nI0124 13:43:14.639576 1870 log.go:172] (0xc0009362c0) Reply frame received for 3\nI0124 13:43:14.639607 1870 log.go:172] (0xc0009362c0) (0xc000354000) Create stream\nI0124 13:43:14.639615 1870 log.go:172] (0xc0009362c0) (0xc000354000) Stream added, broadcasting: 5\nI0124 13:43:14.640957 1870 log.go:172] (0xc0009362c0) Reply frame received for 5\nI0124 13:43:14.722253 1870 log.go:172] (0xc0009362c0) Data frame received for 5\nI0124 13:43:14.722300 1870 log.go:172] (0xc000354000) (5) Data frame handling\nI0124 13:43:14.722311 1870 log.go:172] (0xc000354000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:43:14.744632 1870 log.go:172] (0xc0009362c0) Data frame received for 3\nI0124 13:43:14.744663 1870 log.go:172] (0xc00034e000) (3) Data frame handling\nI0124 13:43:14.744683 1870 log.go:172] (0xc00034e000) (3) Data frame sent\nI0124 13:43:14.828323 1870 log.go:172] (0xc0009362c0) Data frame received for 1\nI0124 13:43:14.828666 1870 log.go:172] (0xc0009362c0) (0xc00034e000) Stream removed, broadcasting: 3\nI0124 13:43:14.828736 1870 log.go:172] (0xc0005cc8c0) (1) Data frame handling\nI0124 13:43:14.828772 1870 log.go:172] (0xc0005cc8c0) (1) Data frame sent\nI0124 13:43:14.828794 1870 log.go:172] (0xc0009362c0) (0xc0005cc8c0) Stream removed, broadcasting: 1\nI0124 13:43:14.829345 1870 log.go:172] (0xc0009362c0) (0xc000354000) Stream removed, broadcasting: 5\nI0124 13:43:14.829392 1870 log.go:172] (0xc0009362c0) (0xc0005cc8c0) Stream removed, broadcasting: 1\nI0124 13:43:14.829405 1870 log.go:172] (0xc0009362c0) (0xc00034e000) Stream removed, broadcasting: 3\nI0124 13:43:14.829414 1870 log.go:172] (0xc0009362c0) (0xc000354000) Stream removed, broadcasting: 5\n" Jan 24 13:43:14.840: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:43:14.841: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:43:14.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 24 13:43:15.366: INFO: stderr: "I0124 13:43:15.040479 1887 log.go:172] (0xc0009de6e0) (0xc0009b0a00) Create stream\nI0124 13:43:15.040597 1887 log.go:172] (0xc0009de6e0) (0xc0009b0a00) Stream added, broadcasting: 1\nI0124 13:43:15.052469 1887 log.go:172] (0xc0009de6e0) Reply frame received for 1\nI0124 13:43:15.052518 1887 log.go:172] (0xc0009de6e0) (0xc0009b0000) Create stream\nI0124 13:43:15.052527 1887 log.go:172] (0xc0009de6e0) (0xc0009b0000) Stream added, broadcasting: 3\nI0124 13:43:15.055349 1887 log.go:172] (0xc0009de6e0) Reply frame received for 3\nI0124 13:43:15.055382 1887 log.go:172] (0xc0009de6e0) (0xc0005b6280) Create stream\nI0124 13:43:15.055396 1887 log.go:172] (0xc0009de6e0) (0xc0005b6280) Stream added, broadcasting: 5\nI0124 13:43:15.059874 1887 log.go:172] (0xc0009de6e0) Reply frame received for 5\nI0124 13:43:15.184811 1887 log.go:172] (0xc0009de6e0) Data frame received for 5\nI0124 13:43:15.184842 1887 log.go:172] (0xc0005b6280) (5) Data frame handling\nI0124 13:43:15.184857 1887 log.go:172] (0xc0005b6280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0124 13:43:15.208940 1887 log.go:172] (0xc0009de6e0) Data frame received for 3\nI0124 13:43:15.209002 1887 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0124 13:43:15.209028 1887 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0124 13:43:15.358133 1887 log.go:172] (0xc0009de6e0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0124 13:43:15.358247 1887 log.go:172] (0xc0009de6e0) Data frame received for 1\nI0124 13:43:15.358277 1887 log.go:172] 
(0xc0009de6e0) (0xc0005b6280) Stream removed, broadcasting: 5\nI0124 13:43:15.358373 1887 log.go:172] (0xc0009b0a00) (1) Data frame handling\nI0124 13:43:15.358401 1887 log.go:172] (0xc0009b0a00) (1) Data frame sent\nI0124 13:43:15.358422 1887 log.go:172] (0xc0009de6e0) (0xc0009b0a00) Stream removed, broadcasting: 1\nI0124 13:43:15.358446 1887 log.go:172] (0xc0009de6e0) Go away received\nI0124 13:43:15.359199 1887 log.go:172] (0xc0009de6e0) (0xc0009b0a00) Stream removed, broadcasting: 1\nI0124 13:43:15.359257 1887 log.go:172] (0xc0009de6e0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0124 13:43:15.359272 1887 log.go:172] (0xc0009de6e0) (0xc0005b6280) Stream removed, broadcasting: 5\n" Jan 24 13:43:15.366: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 24 13:43:15.366: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 24 13:43:15.366: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:43:15.439: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 24 13:43:25.456: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:43:25.456: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:43:25.456: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 24 13:43:25.499: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:25.499: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 
+0000 UTC }] Jan 24 13:43:25.499: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:25.499: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:25.499: INFO: Jan 24 13:43:25.499: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:27.294: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:27.294: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:27.294: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:27.295: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:27.295: INFO: Jan 24 13:43:27.295: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:28.309: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:28.309: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:28.309: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:28.309: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:28.309: INFO: Jan 24 13:43:28.309: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:29.769: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:29.770: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:29.770: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:29.770: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 
13:43:29.770: INFO: Jan 24 13:43:29.770: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:30.780: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:30.780: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:30.780: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:30.780: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:30.780: INFO: Jan 24 13:43:30.780: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:31.868: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:31.868: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 
13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:31.869: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:31.869: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:31.869: INFO: Jan 24 13:43:31.869: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:32.885: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:32.885: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:32.885: INFO: ss-1 
iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:32.885: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:32.886: INFO: Jan 24 13:43:32.886: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:33.904: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:33.904: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:33.904: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:33.904: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:33.904: INFO: Jan 24 13:43:33.904: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 13:43:34.920: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 13:43:34.921: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:29 +0000 UTC }] Jan 24 13:43:34.921: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:51 +0000 UTC }] Jan 24 13:43:34.921: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:43:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:42:52 +0000 UTC }] Jan 24 13:43:34.921: INFO: Jan 24 13:43:34.921: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4798 Jan 24 13:43:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:36.151: INFO: rc: 1 Jan 24 13:43:36.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ea29f0 exit status 1 true [0xc0006ee9e8 0xc0006eebf0 0xc0006ef220] [0xc0006ee9e8 0xc0006eebf0 0xc0006ef220] [0xc0006eeaf0 0xc0006ef1b0] [0xba6c50 0xba6c50] 0xc001738480 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 24 13:43:46.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:46.287: INFO: rc: 1 Jan 24 13:43:46.287: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2ab0 exit status 1 true [0xc0006ef260 0xc0006ef3b0 0xc0006ef468]
[0xc0006ef260 0xc0006ef3b0 0xc0006ef468] [0xc0006ef328 0xc0006ef450] [0xba6c50 0xba6c50] 0xc001738780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:43:56.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:43:56.395: INFO: rc: 1 Jan 24 13:43:56.396: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2ba0 exit status 1 true [0xc0006ef478 0xc0006ef508 0xc0006ef618] [0xc0006ef478 0xc0006ef508 0xc0006ef618] [0xc0006ef4e0 0xc0006ef580] [0xba6c50 0xba6c50] 0xc001738e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:06.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:06.527: INFO: rc: 1 Jan 24 13:44:06.527: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2c60 exit status 1 true [0xc0006ef688 0xc0006ef7d8 0xc0006ef880] [0xc0006ef688 0xc0006ef7d8 0xc0006ef880] [0xc0006ef738 0xc0006ef850] [0xba6c50 0xba6c50] 0xc001739620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:16.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:16.663: INFO: rc: 1 Jan 24 13:44:16.663: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2d20 exit status 1 true [0xc0006ef8a0 0xc0006ef958 0xc0006efa70] [0xc0006ef8a0 0xc0006ef958 0xc0006efa70] [0xc0006ef8f0 0xc0006ef9c0] [0xba6c50 0xba6c50] 0xc001739b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:26.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:26.830: INFO: rc: 1 Jan 24 13:44:26.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b770b0 exit status 1 true [0xc000375818 0xc000375e88 0xc000375f98] [0xc000375818 0xc000375e88 0xc000375f98] [0xc000375bf8 0xc000375f60] [0xba6c50 0xba6c50] 0xc00243bb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:36.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:36.986: INFO: rc: 1 Jan 24 13:44:36.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-0" not found [] 0xc0006ad4d0 exit status 1 true [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e0e0 0xc000d5e118] [0xba6c50 0xba6c50] 0xc001fc3260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:46.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:47.161: INFO: rc: 1 Jan 24 13:44:47.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b771a0 exit status 1 true [0xc002afe000 0xc002afe018 0xc002afe038] [0xc002afe000 0xc002afe018 0xc002afe038] [0xc002afe010 0xc002afe028] [0xba6c50 0xba6c50] 0xc00243be00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:44:57.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:44:57.334: INFO: rc: 1 Jan 24 13:44:57.335: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2e10 exit status 1 true [0xc0006efa80 0xc0006efb50 0xc0006efbd0] [0xc0006efa80 0xc0006efb50 0xc0006efbd0] [0xc0006efaf8 0xc0006efb70] [0xba6c50 0xba6c50] 0xc001cc4240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:07.335: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:07.481: INFO: rc: 1 Jan 24 13:45:07.481: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b77260 exit status 1 true [0xc002afe040 0xc002afe058 0xc002afe070] [0xc002afe040 0xc002afe058 0xc002afe070] [0xc002afe050 0xc002afe068] [0xba6c50 0xba6c50] 0xc0025806c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:17.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:17.642: INFO: rc: 1 Jan 24 13:45:17.642: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006ad590 exit status 1 true [0xc000d5e248 0xc000d5e2c8 0xc000d5e468] [0xc000d5e248 0xc000d5e2c8 0xc000d5e468] [0xc000d5e2b8 0xc000d5e438] [0xba6c50 0xba6c50] 0xc001fc3c80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:27.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:27.781: INFO: rc: 1 Jan 24 13:45:27.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006ad650 exit status 1 true [0xc000d5e480 0xc000d5e5d0 0xc000d5e6c0] [0xc000d5e480 0xc000d5e5d0 0xc000d5e6c0] [0xc000d5e550 0xc000d5e678] [0xba6c50 0xba6c50] 0xc002a38240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:37.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:38.000: INFO: rc: 1 Jan 24 13:45:38.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070ad80 exit status 1 true [0xc000374ab8 0xc0003752e8 0xc000375470] [0xc000374ab8 0xc0003752e8 0xc000375470] [0xc0003751b8 0xc000375418] [0xba6c50 0xba6c50] 0xc001fc3260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:48.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:48.166: INFO: rc: 1 Jan 24 13:45:48.166: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070af90 exit status 1 true [0xc000375520 0xc000375958 0xc000375f38] [0xc000375520 0xc000375958 0xc000375f38] [0xc000375818 0xc000375e88] [0xba6c50 
0xba6c50] 0xc001fc3c80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:45:58.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:45:58.306: INFO: rc: 1 Jan 24 13:45:58.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070b200 exit status 1 true [0xc000375f60 0xc0006ee670 0xc0006ee8b8] [0xc000375f60 0xc0006ee670 0xc0006ee8b8] [0xc0006ee640 0xc0006ee790] [0xba6c50 0xba6c50] 0xc0017383c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:08.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:08.489: INFO: rc: 1 Jan 24 13:46:08.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001660e70 exit status 1 true [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e0e0 0xc000d5e118] [0xba6c50 0xba6c50] 0xc00243a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:18.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:18.650: INFO: 
rc: 1 Jan 24 13:46:18.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070b4d0 exit status 1 true [0xc0006ee9e8 0xc0006eebf0 0xc0006ef220] [0xc0006ee9e8 0xc0006eebf0 0xc0006ef220] [0xc0006eeaf0 0xc0006ef1b0] [0xba6c50 0xba6c50] 0xc0017386c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:28.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:28.775: INFO: rc: 1 Jan 24 13:46:28.775: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006ad530 exit status 1 true [0xc001622098 0xc001622208 0xc0016223a8] [0xc001622098 0xc001622208 0xc0016223a8] [0xc0016221a0 0xc0016222e0] [0xba6c50 0xba6c50] 0xc001cc4180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:38.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:38.915: INFO: rc: 1 Jan 24 13:46:38.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070b7a0 exit status 1 
true [0xc0006ef260 0xc0006ef3b0 0xc0006ef468] [0xc0006ef260 0xc0006ef3b0 0xc0006ef468] [0xc0006ef328 0xc0006ef450] [0xba6c50 0xba6c50] 0xc001738b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:48.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:49.104: INFO: rc: 1 Jan 24 13:46:49.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070b890 exit status 1 true [0xc0006ef478 0xc0006ef508 0xc0006ef618] [0xc0006ef478 0xc0006ef508 0xc0006ef618] [0xc0006ef4e0 0xc0006ef580] [0xba6c50 0xba6c50] 0xc0017393e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:46:59.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:46:59.248: INFO: rc: 1 Jan 24 13:46:59.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006ad680 exit status 1 true [0xc001622408 0xc001622608 0xc001622708] [0xc001622408 0xc001622608 0xc001622708] [0xc001622518 0xc001622648] [0xba6c50 0xba6c50] 0xc001cc52c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:47:09.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:47:09.416: INFO: rc: 1 Jan 24 13:47:09.417: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00070b9e0 exit status 1 true [0xc0006ef688 0xc0006ef7d8 0xc0006ef880] [0xc0006ef688 0xc0006ef7d8 0xc0006ef880] [0xc0006ef738 0xc0006ef850] [0xba6c50 0xba6c50] 0xc001739aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:47:19.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:47:19.542: INFO: rc: 1 Jan 24 13:47:19.542: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006ad770 exit status 1 true [0xc001622778 0xc001622a90 0xc001622c20] [0xc001622778 0xc001622a90 0xc001622c20] [0xc001622a60 0xc001622b88] [0xba6c50 0xba6c50] 0xc001cc5d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 13:47:29.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 24 13:47:29.712: INFO: rc: 1 Jan 24 13:47:29.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea20c0 exit status 1 true [0xc002afe000 0xc002afe018 0xc002afe038] [0xc002afe000 0xc002afe018 0xc002afe038] [0xc002afe010 0xc002afe028] [0xba6c50 0xba6c50] 0xc002a38240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:47:39.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:47:39.840: INFO: rc: 1
Jan 24 13:47:39.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001660ea0 exit status 1 true [0xc000374ab8 0xc0003752e8 0xc000375470] [0xc000374ab8 0xc0003752e8 0xc000375470] [0xc0003751b8 0xc000375418] [0xba6c50 0xba6c50] 0xc001fc2c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:47:49.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:47:49.965: INFO: rc: 1
Jan 24 13:47:49.965: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea20f0 exit status 1 true [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e038 0xc000d5e100 0xc000d5e1d0] [0xc000d5e0e0 0xc000d5e118] [0xba6c50 0xba6c50] 0xc00243a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:47:59.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:48:00.444: INFO: rc: 1
Jan 24 13:48:00.444: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001660f60 exit status 1 true [0xc000375520 0xc000375958 0xc000375f38] [0xc000375520 0xc000375958 0xc000375f38] [0xc000375818 0xc000375e88] [0xba6c50 0xba6c50] 0xc001fc3740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:48:10.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:48:10.644: INFO: rc: 1
Jan 24 13:48:10.644: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001661020 exit status 1 true [0xc000375f60 0xc002afe008 0xc002afe020] [0xc000375f60 0xc002afe008 0xc002afe020] [0xc002afe000 0xc002afe018] [0xba6c50 0xba6c50] 0xc002a380c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:48:20.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:48:20.763: INFO: rc: 1
Jan 24 13:48:20.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016610e0 exit status 1 true [0xc002afe028 0xc002afe048 0xc002afe060] [0xc002afe028 0xc002afe048 0xc002afe060] [0xc002afe040 0xc002afe058] [0xba6c50 0xba6c50] 0xc002a38480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:48:30.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:48:31.012: INFO: rc: 1
Jan 24 13:48:31.012: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea21e0 exit status 1 true [0xc000d5e248 0xc000d5e2c8 0xc000d5e468] [0xc000d5e248 0xc000d5e2c8 0xc000d5e468] [0xc000d5e2b8 0xc000d5e438] [0xba6c50 0xba6c50] 0xc00243b8c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 24 13:48:41.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4798 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:48:41.141: INFO: rc: 1
Jan 24 13:48:41.142: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Jan 24 13:48:41.142: INFO: Scaling statefulset ss to 0
Jan 24 13:48:41.155: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 24 13:48:41.158: INFO:
Deleting all statefulset in ns statefulset-4798 Jan 24 13:48:41.161: INFO: Scaling statefulset ss to 0 Jan 24 13:48:41.172: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 13:48:41.175: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:48:41.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4798" for this suite. Jan 24 13:48:47.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:48:47.382: INFO: namespace statefulset-4798 deletion completed in 6.151643417s • [SLOW TEST:378.131 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:48:47.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:48:47.467: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 24 13:48:50.063: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:48:50.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5088" for this suite. Jan 24 13:48:58.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:48:58.157: INFO: namespace replication-controller-5088 deletion completed in 7.980729859s • [SLOW TEST:10.775 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:48:58.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-c167d202-aea4-4d93-9725-fa2315ccb699 STEP: Creating a pod to test consume secrets Jan 24 13:48:58.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d" in namespace "projected-957" to be "success or failure" Jan 24 13:48:58.688: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.486597ms Jan 24 13:49:00.970: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292940996s Jan 24 13:49:02.986: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308898333s Jan 24 13:49:04.993: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316195711s Jan 24 13:49:07.000: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323224314s Jan 24 13:49:09.008: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.331151571s STEP: Saw pod success Jan 24 13:49:09.008: INFO: Pod "pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d" satisfied condition "success or failure" Jan 24 13:49:09.012: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d container projected-secret-volume-test: STEP: delete the pod Jan 24 13:49:09.101: INFO: Waiting for pod pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d to disappear Jan 24 13:49:09.110: INFO: Pod pod-projected-secrets-1940cca5-251e-4c32-a390-f7ca894dbc9d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:49:09.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-957" for this suite. Jan 24 13:49:15.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:49:15.263: INFO: namespace projected-957 deletion completed in 6.145960401s • [SLOW TEST:17.105 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:49:15.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5457
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 13:49:15.319: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 13:49:47.618: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5457 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 13:49:47.618: INFO: >>> kubeConfig: /root/.kube/config
I0124 13:49:47.707645 9 log.go:172] (0xc0019ca210) (0xc0004a0960) Create stream
I0124 13:49:47.707705 9 log.go:172] (0xc0019ca210) (0xc0004a0960) Stream added, broadcasting: 1
I0124 13:49:47.715988 9 log.go:172] (0xc0019ca210) Reply frame received for 1
I0124 13:49:47.716044 9 log.go:172] (0xc0019ca210) (0xc00204e0a0) Create stream
I0124 13:49:47.716066 9 log.go:172] (0xc0019ca210) (0xc00204e0a0) Stream added, broadcasting: 3
I0124 13:49:47.718197 9 log.go:172] (0xc0019ca210) Reply frame received for 3
I0124 13:49:47.718233 9 log.go:172] (0xc0019ca210) (0xc00204e140) Create stream
I0124 13:49:47.718248 9 log.go:172] (0xc0019ca210) (0xc00204e140) Stream added, broadcasting: 5
I0124 13:49:47.721583 9 log.go:172] (0xc0019ca210) Reply frame received for 5
I0124 13:49:48.914489 9 log.go:172] (0xc0019ca210) Data frame received for 3
I0124 13:49:48.914602 9 log.go:172] (0xc00204e0a0) (3) Data frame handling
I0124 13:49:48.914623 9 log.go:172] (0xc00204e0a0) (3) Data frame sent
I0124 13:49:49.064453 9 log.go:172] (0xc0019ca210) (0xc00204e0a0) Stream removed, broadcasting: 3
I0124 13:49:49.064885 9 log.go:172] (0xc0019ca210) Data frame received for 1
I0124 13:49:49.065010 9 log.go:172] (0xc0019ca210) (0xc00204e140) Stream removed, broadcasting: 5
I0124 13:49:49.065055 9 log.go:172] (0xc0004a0960) (1) Data frame handling
I0124 13:49:49.065070 9 log.go:172] (0xc0004a0960) (1) Data frame sent
I0124 13:49:49.065080 9 log.go:172] (0xc0019ca210) (0xc0004a0960) Stream removed, broadcasting: 1
I0124 13:49:49.065090 9 log.go:172] (0xc0019ca210) Go away received
I0124 13:49:49.065382 9 log.go:172] (0xc0019ca210) (0xc0004a0960) Stream removed, broadcasting: 1
I0124 13:49:49.065436 9 log.go:172] (0xc0019ca210) (0xc00204e0a0) Stream removed, broadcasting: 3
I0124 13:49:49.065499 9 log.go:172] (0xc0019ca210) (0xc00204e140) Stream removed, broadcasting: 5
Jan 24 13:49:49.065: INFO: Found all expected endpoints: [netserver-0]
Jan 24 13:49:49.076: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5457 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 13:49:49.077: INFO: >>> kubeConfig: /root/.kube/config
I0124 13:49:49.142543 9 log.go:172] (0xc0009c1a20) (0xc001dbe140) Create stream
I0124 13:49:49.142632 9 log.go:172] (0xc0009c1a20) (0xc001dbe140) Stream added, broadcasting: 1
I0124 13:49:49.153653 9 log.go:172] (0xc0009c1a20) Reply frame received for 1
I0124 13:49:49.153695 9 log.go:172] (0xc0009c1a20) (0xc00204e1e0) Create stream
I0124 13:49:49.153709 9 log.go:172] (0xc0009c1a20) (0xc00204e1e0) Stream added, broadcasting: 3
I0124 13:49:49.156543 9 log.go:172] (0xc0009c1a20) Reply frame received for 3
I0124 13:49:49.156584 9 log.go:172] (0xc0009c1a20) (0xc0004a0c80) Create stream
I0124 13:49:49.156598 9 log.go:172] (0xc0009c1a20) (0xc0004a0c80) Stream added, broadcasting: 5
I0124 13:49:49.160178 9 log.go:172] (0xc0009c1a20) Reply frame received for 5
I0124 13:49:50.273759 9 log.go:172] (0xc0009c1a20) Data frame received for 3
I0124 13:49:50.273879 9 log.go:172] (0xc00204e1e0) (3) Data frame handling
I0124 13:49:50.273927 9 log.go:172] (0xc00204e1e0) (3) Data frame sent
I0124 13:49:50.599307 9 log.go:172] (0xc0009c1a20) Data frame received for 1
I0124 13:49:50.599469 9 log.go:172] (0xc001dbe140) (1) Data frame handling
I0124 13:49:50.599495 9 log.go:172] (0xc001dbe140) (1) Data frame sent
I0124 13:49:50.619404 9 log.go:172] (0xc0009c1a20) (0xc001dbe140) Stream removed, broadcasting: 1
I0124 13:49:50.621366 9 log.go:172] (0xc0009c1a20) (0xc00204e1e0) Stream removed, broadcasting: 3
I0124 13:49:50.621670 9 log.go:172] (0xc0009c1a20) (0xc0004a0c80) Stream removed, broadcasting: 5
I0124 13:49:50.621783 9 log.go:172] (0xc0009c1a20) (0xc001dbe140) Stream removed, broadcasting: 1
I0124 13:49:50.621817 9 log.go:172] (0xc0009c1a20) (0xc00204e1e0) Stream removed, broadcasting: 3
I0124 13:49:50.621860 9 log.go:172] (0xc0009c1a20) (0xc0004a0c80) Stream removed, broadcasting: 5
Jan 24 13:49:50.622: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:49:50.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5457" for this suite.
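The node-pod UDP check above execs `echo hostName | nc -w 1 -u <pod-ip> 8081` against each netserver pod and passes once every expected endpoint has echoed its hostname back ("Found all expected endpoints"). A minimal sketch of that bookkeeping in Python (a hypothetical helper, not the e2e framework's actual Go code):

```python
def found_all_endpoints(expected, responses):
    """Return True once the hostnames echoed back by the UDP probes
    cover every expected endpoint (order and duplicates ignored)."""
    seen = {r.strip() for r in responses if r.strip()}
    return set(expected) <= seen

# The run above probed netserver-0 and netserver-1 in turn and
# collected each pod's echoed hostname.
assert found_all_endpoints(["netserver-0"], ["netserver-0\n"])
```

The framework keeps retrying the probe until the set of responders covers the expected list or the timeout expires, which is why a single slow pod only delays the test rather than failing it outright.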
Jan 24 13:50:14.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:50:14.942: INFO: namespace pod-network-test-5457 deletion completed in 24.225236879s • [SLOW TEST:59.679 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:50:14.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 24 13:50:23.684: INFO: Successfully updated pod "labelsupdate30bad22e-6eb6-4878-95b0-d604af01d8da" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:50:25.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "projected-9706" for this suite. Jan 24 13:50:47.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:50:47.952: INFO: namespace projected-9706 deletion completed in 22.185183986s • [SLOW TEST:33.009 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:50:47.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 24 13:50:48.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667" in namespace "projected-2313" to be "success or failure" Jan 24 13:50:48.079: INFO: Pod 
"downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667": Phase="Pending", Reason="", readiness=false. Elapsed: 9.208285ms Jan 24 13:50:50.088: INFO: Pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017641338s Jan 24 13:50:52.094: INFO: Pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023866681s Jan 24 13:50:54.103: INFO: Pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032600113s Jan 24 13:50:56.111: INFO: Pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040716643s STEP: Saw pod success Jan 24 13:50:56.111: INFO: Pod "downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667" satisfied condition "success or failure" Jan 24 13:50:56.118: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667 container client-container: STEP: delete the pod Jan 24 13:50:56.183: INFO: Waiting for pod downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667 to disappear Jan 24 13:50:56.192: INFO: Pod downwardapi-volume-383a209a-4a6d-4767-a3d7-9802e5c1b667 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:50:56.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2313" for this suite. 
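The test above verifies the downward API fallback behavior its name describes: when a container declares no memory limit, `limits.memory` projected into the volume resolves to the node's allocatable memory instead. A sketch of that resolution rule (hypothetical helper, not the framework's code):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API fallback exercised above: limits.memory resolves to
    the container's own limit when one is set, otherwise to the node's
    allocatable memory."""
    return container_limit if container_limit is not None else node_allocatable

# No container limit set: the pod sees node allocatable (values illustrative).
assert effective_memory_limit(None, 4 * 1024**3) == 4 * 1024**3
```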
Jan 24 13:51:04.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:51:04.375: INFO: namespace projected-2313 deletion completed in 8.175721726s • [SLOW TEST:16.423 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:51:04.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 24 13:51:04.508: INFO: Waiting up to 5m0s for pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382" in namespace "downward-api-9288" to be "success or failure" Jan 24 13:51:04.517: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200748ms Jan 24 13:51:06.532: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023994404s Jan 24 13:51:08.540: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032026397s Jan 24 13:51:10.556: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048019473s Jan 24 13:51:12.568: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059887252s STEP: Saw pod success Jan 24 13:51:12.569: INFO: Pod "downward-api-afbecdac-648e-487b-88e3-0fdefeae4382" satisfied condition "success or failure" Jan 24 13:51:12.574: INFO: Trying to get logs from node iruya-node pod downward-api-afbecdac-648e-487b-88e3-0fdefeae4382 container dapi-container: STEP: delete the pod Jan 24 13:51:12.762: INFO: Waiting for pod downward-api-afbecdac-648e-487b-88e3-0fdefeae4382 to disappear Jan 24 13:51:12.778: INFO: Pod downward-api-afbecdac-648e-487b-88e3-0fdefeae4382 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:51:12.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9288" for this suite. 
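This [sig-node] variant exposes the same defaulted limits through environment variables rather than a volume, via `valueFrom.resourceFieldRef` entries in the container spec. A sketch of the env entries such a pod would carry (hypothetical variable names; the env-var names the actual test uses are not shown in the log):

```python
def downward_api_env(name, resource):
    """Build one downward-API env entry: a resourceFieldRef exposing a
    resource value (which defaults to node allocatable when the
    container sets no limit, as the test above checks)."""
    return {
        "name": name,
        "valueFrom": {"resourceFieldRef": {"resource": resource}},
    }

env = [
    downward_api_env("MEMORY_LIMIT", "limits.memory"),
    downward_api_env("CPU_LIMIT", "limits.cpu"),
]
```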
Jan 24 13:51:18.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:51:19.099: INFO: namespace downward-api-9288 deletion completed in 6.317422501s • [SLOW TEST:14.724 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:51:19.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 24 13:51:19.159: INFO: Waiting up to 5m0s for pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494" in namespace "emptydir-9031" to be "success or failure" Jan 24 13:51:19.218: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494": Phase="Pending", Reason="", readiness=false. Elapsed: 59.228341ms Jan 24 13:51:21.224: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065448412s Jan 24 13:51:23.244: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.085290055s Jan 24 13:51:25.257: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097614293s Jan 24 13:51:27.264: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105232497s STEP: Saw pod success Jan 24 13:51:27.264: INFO: Pod "pod-e732df55-5e79-4cf3-8ef2-1adec05f6494" satisfied condition "success or failure" Jan 24 13:51:27.269: INFO: Trying to get logs from node iruya-node pod pod-e732df55-5e79-4cf3-8ef2-1adec05f6494 container test-container: STEP: delete the pod Jan 24 13:51:27.362: INFO: Waiting for pod pod-e732df55-5e79-4cf3-8ef2-1adec05f6494 to disappear Jan 24 13:51:27.434: INFO: Pod pod-e732df55-5e79-4cf3-8ef2-1adec05f6494 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:51:27.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9031" for this suite. 
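The `(root,0777,default)` case name above encodes the three parameters of the emptyDir matrix: the user writing the file (root), the mount's permission bits (0777), and the volume medium (the node's default disk-backed storage, as opposed to `Memory`). The mode arithmetic the test asserts on:

```python
import stat

# 0777 octal == 511 decimal: read/write/execute for owner, group, and other.
MODE = 0o777
assert MODE == 511

# Rendered the way `ls -l` (and the test container's output) shows a
# directory carrying that mode.
perm = stat.filemode(stat.S_IFDIR | MODE)
```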
Jan 24 13:51:33.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:51:33.641: INFO: namespace emptydir-9031 deletion completed in 6.199123259s • [SLOW TEST:14.542 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:51:33.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f7549c7d-b2b9-45c5-a460-1e8005bca58f STEP: Creating a pod to test consume secrets Jan 24 13:51:33.851: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335" in namespace "projected-6431" to be "success or failure" Jan 24 13:51:33.925: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.224015ms Jan 24 13:51:35.933: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08116012s Jan 24 13:51:37.953: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10173531s Jan 24 13:51:39.960: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108280371s Jan 24 13:51:41.977: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125800776s STEP: Saw pod success Jan 24 13:51:41.978: INFO: Pod "pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335" satisfied condition "success or failure" Jan 24 13:51:41.985: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335 container projected-secret-volume-test: STEP: delete the pod Jan 24 13:51:42.066: INFO: Waiting for pod pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335 to disappear Jan 24 13:51:42.074: INFO: Pod pod-projected-secrets-c5c1769d-975c-4611-b7be-f4e1faa68335 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:51:42.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6431" for this suite. 
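The defaultMode test above mounts a projected secret volume and checks that each secret key appears as a file with the requested permission bits. Secret values are stored base64-encoded in the API and decoded on projection. A minimal sketch of that mapping (hypothetical helper and key names; the test's real secret contents are not shown in the log):

```python
import base64

def project_secret(data, default_mode=0o644):
    """Sketch of projected-secret behavior: each key becomes a file whose
    content is the base64-decoded value and whose mode is defaultMode."""
    return {
        key: (base64.b64decode(value), default_mode)
        for key, value in data.items()
    }

files = project_secret(
    {"data-1": base64.b64encode(b"value-1").decode()},
    default_mode=0o400,
)
```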
Jan 24 13:51:48.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:51:48.254: INFO: namespace projected-6431 deletion completed in 6.170922175s • [SLOW TEST:14.611 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:51:48.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:51:48.376: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:51:56.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9019" for this suite. 
Jan 24 13:52:48.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:52:48.678: INFO: namespace pods-9019 deletion completed in 52.17170876s • [SLOW TEST:60.424 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:52:48.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 24 13:52:48.778: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:52:57.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-706" for this suite. 
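The two websocket tests above (log retrieval and remote command execution) both talk to pod subresources of the API server over an upgraded connection rather than plain HTTP GETs. A sketch of the REST path construction for the log subresource, assuming the standard `/api/v1` pod routes (pod name is illustrative; the log does not show it):

```python
from urllib.parse import urlencode

def pod_log_path(namespace, pod, follow=False):
    """Build the pods/log subresource path; follow=true streams the log,
    which is what the websocket variant of the test exercises."""
    path = f"/api/v1/namespaces/{namespace}/pods/{pod}/log"
    return path + ("?" + urlencode({"follow": "true"}) if follow else "")

url = pod_log_path("pods-9019", "pod-logs-websocket", follow=True)
```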
Jan 24 13:53:39.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:53:39.433: INFO: namespace pods-706 deletion completed in 42.192067675s • [SLOW TEST:50.753 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:53:39.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jan 24 13:53:39.538: INFO: Waiting up to 5m0s for pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b" in namespace "containers-7135" to be "success or failure" Jan 24 13:53:39.550: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.882161ms Jan 24 13:53:41.557: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019404552s Jan 24 13:53:43.565: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027521724s Jan 24 13:53:45.574: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035998033s Jan 24 13:53:47.583: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04484064s STEP: Saw pod success Jan 24 13:53:47.583: INFO: Pod "client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b" satisfied condition "success or failure" Jan 24 13:53:47.587: INFO: Trying to get logs from node iruya-node pod client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b container test-container: STEP: delete the pod Jan 24 13:53:47.798: INFO: Waiting for pod client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b to disappear Jan 24 13:53:47.808: INFO: Pod client-containers-fa29551b-f774-4725-bc4f-defcd9e6d61b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:53:47.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7135" for this suite. 
Jan 24 13:53:53.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:53:54.002: INFO: namespace containers-7135 deletion completed in 6.182671737s • [SLOW TEST:14.569 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:53:54.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 24 13:53:54.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67" in namespace "projected-6764" to be "success or failure" Jan 24 13:53:54.176: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.128256ms Jan 24 13:53:56.185: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091062656s Jan 24 13:53:58.192: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097864545s Jan 24 13:54:00.221: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126957869s Jan 24 13:54:02.234: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139570221s STEP: Saw pod success Jan 24 13:54:02.234: INFO: Pod "downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67" satisfied condition "success or failure" Jan 24 13:54:02.238: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67 container client-container: STEP: delete the pod Jan 24 13:54:02.319: INFO: Waiting for pod downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67 to disappear Jan 24 13:54:02.329: INFO: Pod downwardapi-volume-41d144aa-333a-441d-8887-b4b6f5bb1a67 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:54:02.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6764" for this suite. 
Jan 24 13:54:08.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:54:08.527: INFO: namespace projected-6764 deletion completed in 6.189683165s • [SLOW TEST:14.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:54:08.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:54:17.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6101" for this suite. 
Jan 24 13:54:39.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:54:39.890: INFO: namespace replication-controller-6101 deletion completed in 22.174502002s • [SLOW TEST:31.363 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:54:39.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 24 13:54:40.011: INFO: PodSpec: initContainers in spec.initContainers Jan 24 13:55:39.102: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9eac9b55-f6ad-4846-8ffd-e5a5cdc82eb3", GenerateName:"", Namespace:"init-container-4342", 
SelfLink:"/api/v1/namespaces/init-container-4342/pods/pod-init-9eac9b55-f6ad-4846-8ffd-e5a5cdc82eb3", UID:"9a1c9a1a-883b-43f1-8ced-765279f36d7d", ResourceVersion:"21689088", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715470880, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"11688596"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-78qf4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003130b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78qf4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78qf4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78qf4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0023ae258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029cf980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0023ae2f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023ae320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0023ae328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0023ae32c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715470880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715470880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715470880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715470880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00284c900), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025908c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://28710387700ff4fd5d8e7f8b857b92ff8093ada6b325dea8b048c8f64a48d750"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00284c940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00284c920), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:55:39.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4342" for this suite. 
Jan 24 13:56:01.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:56:01.333: INFO: namespace init-container-4342 deletion completed in 22.215864141s • [SLOW TEST:81.442 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:56:01.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 24 13:56:01.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5210' Jan 24 13:56:03.826: INFO: 
stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 13:56:03.826: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 24 13:56:03.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5210' Jan 24 13:56:04.093: INFO: stderr: "" Jan 24 13:56:04.093: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:56:04.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5210" for this suite. Jan 24 13:56:10.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:56:10.265: INFO: namespace kubectl-5210 deletion completed in 6.161813882s • [SLOW TEST:8.931 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:56:10.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-4f301b60-ad62-4d98-b905-80e4a71ea1dd STEP: Creating configMap with name cm-test-opt-upd-8e98f5a0-dcef-4e4e-8ac9-96d3a230337b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4f301b60-ad62-4d98-b905-80e4a71ea1dd STEP: Updating configmap cm-test-opt-upd-8e98f5a0-dcef-4e4e-8ac9-96d3a230337b STEP: Creating configMap with name cm-test-opt-create-bb450b0a-595d-4234-b929-bf25c2df3d67 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:56:24.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2991" for this suite. 
Jan 24 13:57:02.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:57:02.859: INFO: namespace projected-2991 deletion completed in 38.160781618s • [SLOW TEST:52.594 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:57:02.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-6341 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6341 STEP: Deleting pre-stop pod Jan 24 13:57:24.100: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:57:24.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6341" for this suite. Jan 24 13:58:08.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:58:08.376: INFO: namespace prestop-6341 deletion completed in 44.224095435s • [SLOW TEST:65.516 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:58:08.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jan 24 13:58:08.493: INFO: Waiting up to 5m0s for pod 
"pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e" in namespace "emptydir-6167" to be "success or failure" Jan 24 13:58:08.510: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.108919ms Jan 24 13:58:10.525: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032603689s Jan 24 13:58:12.823: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330734369s Jan 24 13:58:14.836: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342987198s Jan 24 13:58:16.846: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.353078706s STEP: Saw pod success Jan 24 13:58:16.846: INFO: Pod "pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e" satisfied condition "success or failure" Jan 24 13:58:16.851: INFO: Trying to get logs from node iruya-node pod pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e container test-container: STEP: delete the pod Jan 24 13:58:17.382: INFO: Waiting for pod pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e to disappear Jan 24 13:58:17.396: INFO: Pod pod-3422f26f-0f36-4c89-8eff-249cc3f3fd7e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 24 13:58:17.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6167" for this suite. 
Jan 24 13:58:23.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 13:58:23.565: INFO: namespace emptydir-6167 deletion completed in 6.162260352s • [SLOW TEST:15.189 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 24 13:58:23.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 13:58:23.713: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 64.862145ms)
Jan 24 13:58:23.724: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.090366ms)
Jan 24 13:58:23.727: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.537845ms)
Jan 24 13:58:23.732: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.428913ms)
Jan 24 13:58:23.737: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.007976ms)
Jan 24 13:58:23.741: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.08176ms)
Jan 24 13:58:23.746: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.482378ms)
Jan 24 13:58:23.751: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.841712ms)
Jan 24 13:58:23.757: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.467177ms)
Jan 24 13:58:23.761: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.628272ms)
Jan 24 13:58:23.771: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.179055ms)
Jan 24 13:58:23.798: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 27.121557ms)
Jan 24 13:58:23.831: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 32.844657ms)
Jan 24 13:58:23.873: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 42.218998ms)
Jan 24 13:58:23.894: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.886024ms)
Jan 24 13:58:23.918: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.340338ms)
Jan 24 13:58:23.950: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 31.988278ms)
Jan 24 13:58:23.965: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.953033ms)
Jan 24 13:58:23.985: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.063295ms)
Jan 24 13:58:23.994: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.518384ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:58:23.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2174" for this suite.
Jan 24 13:58:30.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:58:30.167: INFO: namespace proxy-2174 deletion completed in 6.156807535s

• [SLOW TEST:6.602 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
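Editor's note: the proxy test above fetches the kubelet's /logs/ endpoint twenty times through the apiserver's node proxy subresource. A minimal sketch of how that request path is assembled, using the node name and kubelet port exactly as they appear in the log; the kubectl call is shown only as a comment because it needs a live cluster:

```shell
# Node name and explicit kubelet port, taken from the log lines above.
node="iruya-node"
port=10250

# The apiserver proxy-subresource path the test requests repeatedly.
path="/api/v1/nodes/${node}:${port}/proxy/logs/"
echo "$path"

# Against a live cluster this path could be fetched with, for example:
#   kubectl get --raw "$path"
```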
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:58:30.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 24 13:58:38.908: INFO: Successfully updated pod "labelsupdate1372f20e-f3f9-474e-be67-cbfeb21655ff"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:58:40.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7269" for this suite.
Jan 24 13:59:02.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:59:03.070: INFO: namespace downward-api-7269 deletion completed in 22.108571594s

• [SLOW TEST:32.902 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:59:03.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 13:59:03.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3607'
Jan 24 13:59:03.430: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 13:59:03.430: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 24 13:59:03.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3607'
Jan 24 13:59:03.748: INFO: stderr: ""
Jan 24 13:59:03.748: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:59:03.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3607" for this suite.
Jan 24 13:59:09.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:59:09.984: INFO: namespace kubectl-3607 deletion completed in 6.231170971s

• [SLOW TEST:6.913 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:59:09.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6042.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6042.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6042.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 63.199.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.199.63_udp@PTR;check="$$(dig +tcp +noall +answer +search 63.199.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.199.63_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6042.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6042.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6042.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6042.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 63.199.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.199.63_udp@PTR;check="$$(dig +tcp +noall +answer +search 63.199.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.199.63_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 13:59:22.317: INFO: Unable to read wheezy_udp@dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.336: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.340: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.349: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.355: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.359: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.369: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.373: INFO: Unable to read 10.110.199.63_udp@PTR from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.376: INFO: Unable to read 10.110.199.63_tcp@PTR from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.380: INFO: Unable to read jessie_udp@dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.384: INFO: Unable to read jessie_tcp@dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.389: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.394: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.399: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.402: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.404: INFO: Unable to read jessie_udp@PodARecord from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.407: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.410: INFO: Unable to read 10.110.199.63_udp@PTR from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.414: INFO: Unable to read 10.110.199.63_tcp@PTR from pod dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1: the server could not find the requested resource (get pods dns-test-8133c782-4757-49bd-b06b-966d499981b1)
Jan 24 13:59:22.414: INFO: Lookups using dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1 failed for: [wheezy_udp@dns-test-service.dns-6042.svc.cluster.local wheezy_tcp@dns-test-service.dns-6042.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.199.63_udp@PTR 10.110.199.63_tcp@PTR jessie_udp@dns-test-service.dns-6042.svc.cluster.local jessie_tcp@dns-test-service.dns-6042.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6042.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6042.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6042.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.199.63_udp@PTR 10.110.199.63_tcp@PTR]

Jan 24 13:59:27.639: INFO: DNS probes using dns-6042/dns-test-8133c782-4757-49bd-b06b-966d499981b1 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:59:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6042" for this suite.
Jan 24 13:59:34.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:59:34.151: INFO: namespace dns-6042 deletion completed in 6.130488938s

• [SLOW TEST:24.166 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
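Editor's note: the wheezy/jessie probe commands above derive two DNS names from an IP address: a dashed pod-style A record under the test namespace, and a reversed in-addr.arpa name for the PTR checks. A hedged sketch of just that string manipulation, reusing the 10.110.199.63 address from the log for illustration (the real probe derives the A record from the pod's own IP via `hostname -i`, and the dig queries need live cluster DNS):

```shell
ip="10.110.199.63"
ns="dns-6042"

# Pod-style A record: dots become dashes, then "<ns>.pod.cluster.local" is
# appended -- the same awk transform the probe scripts use.
a_rec="$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4"."}')${ns}.pod.cluster.local"

# PTR name: octets reversed under in-addr.arpa, matching the
# "63.199.110.10.in-addr.arpa." queries in the log.
ptr="$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')"

echo "$a_rec"
echo "$ptr"
```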
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:59:34.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9e99c1de-d704-4a14-94dc-ec369fc3eaac
STEP: Creating a pod to test consume configMaps
Jan 24 13:59:34.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa" in namespace "configmap-2608" to be "success or failure"
Jan 24 13:59:34.310: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa": Phase="Pending", Reason="", readiness=false. Elapsed: 26.264551ms
Jan 24 13:59:36.321: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036433304s
Jan 24 13:59:38.336: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051703191s
Jan 24 13:59:40.405: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121106524s
Jan 24 13:59:42.413: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129152728s
STEP: Saw pod success
Jan 24 13:59:42.413: INFO: Pod "pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa" satisfied condition "success or failure"
Jan 24 13:59:42.417: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa container configmap-volume-test: 
STEP: delete the pod
Jan 24 13:59:42.503: INFO: Waiting for pod pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa to disappear
Jan 24 13:59:42.523: INFO: Pod pod-configmaps-c510f865-1551-4804-bb19-9a8f474760aa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 13:59:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2608" for this suite.
Jan 24 13:59:48.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:59:48.745: INFO: namespace configmap-2608 deletion completed in 6.212627011s

• [SLOW TEST:14.593 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
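Editor's note: the "Waiting up to 5m0s for pod ... Elapsed: ..." lines in tests like the one above come from a poll-until-condition loop in the e2e framework. A generic shell sketch of that pattern; the function name and the kubectl condition in the comment are illustrative assumptions, not the framework's actual Go code:

```shell
# wait_for TIMEOUT_SECONDS CMD...: re-run CMD until it succeeds or
# TIMEOUT_SECONDS elapse. Returns 0 on success, 1 on timeout.
wait_for() {
  local timeout=$1; shift
  local start now
  start=$(date +%s)
  while ! "$@"; do
    now=$(date +%s)
    if [ $((now - start)) -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Against a live cluster the condition could be, for example:
#   wait_for 300 sh -c 'kubectl get pod my-pod -o jsonpath="{.status.phase}" | grep -q Succeeded'
```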
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 13:59:48.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-vl87
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 13:59:48.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vl87" in namespace "subpath-2435" to be "success or failure"
Jan 24 13:59:48.923: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Pending", Reason="", readiness=false. Elapsed: 40.172978ms
Jan 24 13:59:50.928: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04546358s
Jan 24 13:59:52.935: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052704573s
Jan 24 13:59:54.945: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062429481s
Jan 24 13:59:56.953: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 8.070705039s
Jan 24 13:59:58.960: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 10.077638766s
Jan 24 14:00:00.969: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 12.08575753s
Jan 24 14:00:02.978: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 14.095612692s
Jan 24 14:00:04.986: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 16.103627054s
Jan 24 14:00:06.995: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 18.112147488s
Jan 24 14:00:09.001: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 20.118623679s
Jan 24 14:00:11.007: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 22.123792863s
Jan 24 14:00:13.015: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 24.131931087s
Jan 24 14:00:15.027: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 26.144659524s
Jan 24 14:00:17.035: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Running", Reason="", readiness=true. Elapsed: 28.15204487s
Jan 24 14:00:19.045: INFO: Pod "pod-subpath-test-configmap-vl87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.161718341s
STEP: Saw pod success
Jan 24 14:00:19.045: INFO: Pod "pod-subpath-test-configmap-vl87" satisfied condition "success or failure"
Jan 24 14:00:19.049: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-vl87 container test-container-subpath-configmap-vl87: 
STEP: delete the pod
Jan 24 14:00:19.109: INFO: Waiting for pod pod-subpath-test-configmap-vl87 to disappear
Jan 24 14:00:19.120: INFO: Pod pod-subpath-test-configmap-vl87 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vl87
Jan 24 14:00:19.120: INFO: Deleting pod "pod-subpath-test-configmap-vl87" in namespace "subpath-2435"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:00:19.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2435" for this suite.
Jan 24 14:00:25.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:00:25.280: INFO: namespace subpath-2435 deletion completed in 6.143511249s

• [SLOW TEST:36.535 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:00:25.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 14:03:25.702: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:25.786: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:27.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:27.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:29.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:29.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:31.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:31.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:33.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:33.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:35.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:35.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:37.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:37.803: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:39.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:39.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:41.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:41.823: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:43.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:43.806: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:45.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:45.799: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:47.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:47.792: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:49.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:49.798: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:51.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:51.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:53.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:53.794: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:55.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:55.795: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 14:03:57.787: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 14:03:57.794: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:03:57.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4283" for this suite.
Jan 24 14:04:21.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:04:21.965: INFO: namespace container-lifecycle-hook-4283 deletion completed in 24.163102411s

• [SLOW TEST:236.685 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
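The poststart test above polls every 2s until the hook pod is deleted. For context, a minimal pod exercising a postStart exec hook looks roughly like this (a sketch; the image and commands are illustrative assumptions, not the suite's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts;
          # the container is not marked Running until the hook returns.
          command: ["sh", "-c", "echo poststart"]
```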
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:04:21.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 14:04:22.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3" in namespace "downward-api-2739" to be "success or failure"
Jan 24 14:04:22.077: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.623853ms
Jan 24 14:04:24.086: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017881621s
Jan 24 14:04:26.095: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026372204s
Jan 24 14:04:28.101: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032768418s
Jan 24 14:04:30.108: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039874921s
STEP: Saw pod success
Jan 24 14:04:30.108: INFO: Pod "downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3" satisfied condition "success or failure"
Jan 24 14:04:30.111: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3 container client-container: 
STEP: delete the pod
Jan 24 14:04:30.164: INFO: Waiting for pod downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3 to disappear
Jan 24 14:04:30.189: INFO: Pod downwardapi-volume-813db044-670b-4574-8dea-bd44b0cfe0c3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:04:30.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2739" for this suite.
Jan 24 14:04:36.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:04:36.488: INFO: namespace downward-api-2739 deletion completed in 6.293859389s

• [SLOW TEST:14.522 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
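The DefaultMode test above creates a downward API volume and verifies the file permissions on its projected files. A hedged sketch of such a pod (names and paths are illustrative, not the suite's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      # defaultMode sets the permission bits on every projected file
      # unless an item overrides it with its own mode.
      defaultMode: 0644
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```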
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:04:36.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0124 14:05:18.078198       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 14:05:18.078: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:05:18.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8571" for this suite.
Jan 24 14:05:38.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:05:38.410: INFO: namespace gc-8571 deletion completed in 20.329183802s

• [SLOW TEST:61.922 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
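The "if delete options say so" part of the test above refers to the DeleteOptions sent with the rc deletion. An orphaning delete carries a propagation policy roughly like this request body (sketch; the surrounding request details are omitted):

```yaml
apiVersion: v1
kind: DeleteOptions
# Orphan tells the garbage collector to strip owner references from
# the rc's pods instead of cascading the deletion to them, which is
# why the test waits 30s and expects the pods to survive.
propagationPolicy: Orphan
```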
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:05:38.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 14:05:38.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6127'
Jan 24 14:05:38.572: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 14:05:38.572: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 24 14:05:40.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6127'
Jan 24 14:05:40.828: INFO: stderr: ""
Jan 24 14:05:40.828: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:05:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6127" for this suite.
Jan 24 14:05:46.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:05:47.017: INFO: namespace kubectl-6127 deletion completed in 6.17784284s

• [SLOW TEST:8.606 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
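As the deprecation warning above notes, `kubectl run --generator=deployment/apps.v1` was on its way out. The manifest it generated is approximately the following (a sketch; the generator's exact labels and defaults may differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```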
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:05:47.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 24 14:05:55.754: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2296 pod-service-account-1ac46c8b-9ab4-47eb-967b-3c4883bfa0c5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 24 14:05:56.255: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2296 pod-service-account-1ac46c8b-9ab4-47eb-967b-3c4883bfa0c5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 24 14:05:56.856: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2296 pod-service-account-1ac46c8b-9ab4-47eb-967b-3c4883bfa0c5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:05:57.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2296" for this suite.
Jan 24 14:06:03.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:06:03.406: INFO: namespace svcaccounts-2296 deletion completed in 6.13823257s

• [SLOW TEST:16.388 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
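The three `kubectl exec ... cat` calls above read the files that the service account admission controller mounts into every pod. A minimal sketch of a pod relying on that mount (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    # token, ca.crt, and namespace are auto-mounted at this path
    # unless automountServiceAccountToken is set to false.
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"]
```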
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:06:03.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 24 14:06:03.509: INFO: Waiting up to 5m0s for pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3" in namespace "var-expansion-8488" to be "success or failure"
Jan 24 14:06:03.514: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860762ms
Jan 24 14:06:05.520: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010812898s
Jan 24 14:06:07.527: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018284053s
Jan 24 14:06:09.540: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031174279s
Jan 24 14:06:11.550: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04099367s
Jan 24 14:06:13.558: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049701242s
STEP: Saw pod success
Jan 24 14:06:13.559: INFO: Pod "var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3" satisfied condition "success or failure"
Jan 24 14:06:13.563: INFO: Trying to get logs from node iruya-node pod var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3 container dapi-container: 
STEP: delete the pod
Jan 24 14:06:13.707: INFO: Waiting for pod var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3 to disappear
Jan 24 14:06:13.716: INFO: Pod var-expansion-b6b6a1c5-7161-4a90-a166-8b39241376d3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:06:13.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8488" for this suite.
Jan 24 14:06:19.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:06:19.884: INFO: namespace var-expansion-8488 deletion completed in 6.123436624s

• [SLOW TEST:16.478 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
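The substitution being tested above is the kubelet-side expansion of `$(VAR)` references in a container's args. A hedged sketch (env var and echo command are assumptions, not the suite's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    # $(POD_NAME) is substituted from the env list before the
    # container starts; unknown $(...) references are left verbatim.
    args: ["echo $(POD_NAME)"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```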
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:06:19.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7474
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 14:06:19.967: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 14:06:56.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7474 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 14:06:56.128: INFO: >>> kubeConfig: /root/.kube/config
I0124 14:06:56.206032       9 log.go:172] (0xc001d93080) (0xc0009f7040) Create stream
I0124 14:06:56.206061       9 log.go:172] (0xc001d93080) (0xc0009f7040) Stream added, broadcasting: 1
I0124 14:06:56.214773       9 log.go:172] (0xc001d93080) Reply frame received for 1
I0124 14:06:56.214820       9 log.go:172] (0xc001d93080) (0xc001e56460) Create stream
I0124 14:06:56.214835       9 log.go:172] (0xc001d93080) (0xc001e56460) Stream added, broadcasting: 3
I0124 14:06:56.216729       9 log.go:172] (0xc001d93080) Reply frame received for 3
I0124 14:06:56.216756       9 log.go:172] (0xc001d93080) (0xc0015b6f00) Create stream
I0124 14:06:56.216768       9 log.go:172] (0xc001d93080) (0xc0015b6f00) Stream added, broadcasting: 5
I0124 14:06:56.218831       9 log.go:172] (0xc001d93080) Reply frame received for 5
I0124 14:06:56.451654       9 log.go:172] (0xc001d93080) Data frame received for 3
I0124 14:06:56.451744       9 log.go:172] (0xc001e56460) (3) Data frame handling
I0124 14:06:56.451772       9 log.go:172] (0xc001e56460) (3) Data frame sent
I0124 14:06:56.718787       9 log.go:172] (0xc001d93080) (0xc001e56460) Stream removed, broadcasting: 3
I0124 14:06:56.718970       9 log.go:172] (0xc001d93080) (0xc0015b6f00) Stream removed, broadcasting: 5
I0124 14:06:56.718991       9 log.go:172] (0xc001d93080) Data frame received for 1
I0124 14:06:56.719004       9 log.go:172] (0xc0009f7040) (1) Data frame handling
I0124 14:06:56.719018       9 log.go:172] (0xc0009f7040) (1) Data frame sent
I0124 14:06:56.719026       9 log.go:172] (0xc001d93080) (0xc0009f7040) Stream removed, broadcasting: 1
I0124 14:06:56.719042       9 log.go:172] (0xc001d93080) Go away received
I0124 14:06:56.719282       9 log.go:172] (0xc001d93080) (0xc0009f7040) Stream removed, broadcasting: 1
I0124 14:06:56.719307       9 log.go:172] (0xc001d93080) (0xc001e56460) Stream removed, broadcasting: 3
I0124 14:06:56.719320       9 log.go:172] (0xc001d93080) (0xc0015b6f00) Stream removed, broadcasting: 5
Jan 24 14:06:56.719: INFO: Waiting for endpoints: map[]
Jan 24 14:06:56.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-7474 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 14:06:56.728: INFO: >>> kubeConfig: /root/.kube/config
I0124 14:06:56.831352       9 log.go:172] (0xc0002598c0) (0xc0015b7360) Create stream
I0124 14:06:56.831475       9 log.go:172] (0xc0002598c0) (0xc0015b7360) Stream added, broadcasting: 1
I0124 14:06:56.844453       9 log.go:172] (0xc0002598c0) Reply frame received for 1
I0124 14:06:56.844482       9 log.go:172] (0xc0002598c0) (0xc0019c20a0) Create stream
I0124 14:06:56.844489       9 log.go:172] (0xc0002598c0) (0xc0019c20a0) Stream added, broadcasting: 3
I0124 14:06:56.846382       9 log.go:172] (0xc0002598c0) Reply frame received for 3
I0124 14:06:56.846407       9 log.go:172] (0xc0002598c0) (0xc0009f7180) Create stream
I0124 14:06:56.846418       9 log.go:172] (0xc0002598c0) (0xc0009f7180) Stream added, broadcasting: 5
I0124 14:06:56.850715       9 log.go:172] (0xc0002598c0) Reply frame received for 5
I0124 14:06:56.986891       9 log.go:172] (0xc0002598c0) Data frame received for 3
I0124 14:06:56.986953       9 log.go:172] (0xc0019c20a0) (3) Data frame handling
I0124 14:06:56.986974       9 log.go:172] (0xc0019c20a0) (3) Data frame sent
I0124 14:06:57.130796       9 log.go:172] (0xc0002598c0) (0xc0019c20a0) Stream removed, broadcasting: 3
I0124 14:06:57.130991       9 log.go:172] (0xc0002598c0) Data frame received for 1
I0124 14:06:57.131008       9 log.go:172] (0xc0015b7360) (1) Data frame handling
I0124 14:06:57.131236       9 log.go:172] (0xc0015b7360) (1) Data frame sent
I0124 14:06:57.131318       9 log.go:172] (0xc0002598c0) (0xc0009f7180) Stream removed, broadcasting: 5
I0124 14:06:57.131375       9 log.go:172] (0xc0002598c0) (0xc0015b7360) Stream removed, broadcasting: 1
I0124 14:06:57.131403       9 log.go:172] (0xc0002598c0) Go away received
I0124 14:06:57.131582       9 log.go:172] (0xc0002598c0) (0xc0015b7360) Stream removed, broadcasting: 1
I0124 14:06:57.131598       9 log.go:172] (0xc0002598c0) (0xc0019c20a0) Stream removed, broadcasting: 3
I0124 14:06:57.131615       9 log.go:172] (0xc0002598c0) (0xc0009f7180) Stream removed, broadcasting: 5
Jan 24 14:06:57.131: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:06:57.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7474" for this suite.
Jan 24 14:07:21.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:07:21.253: INFO: namespace pod-network-test-7474 deletion completed in 24.112411237s

• [SLOW TEST:61.369 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
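The curl in the ExecWithOptions lines above hits a `/dial` endpoint on one test pod, which in turn dials the target pod and reports back which hostname answered. Very roughly, the probe side could be expressed as a pod like this (a loose sketch; the framework's real test pods and images differ, and the IPs are the ones from this run's log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dial-probe-example
spec:
  restartPolicy: Never
  containers:
  - name: prober
    image: curlimages/curl
    # Ask the webserver pod at 10.44.0.2 to dial 10.32.0.4:8080 once
    # over http and return the responding pod's hostName.
    args: ["-g", "-q", "-s",
      "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"]
```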
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:07:21.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 24 14:07:37.452: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:37.510: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:39.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:39.518: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:41.511: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:41.518: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:43.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:43.531: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:45.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:45.520: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:47.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:47.521: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:49.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:49.518: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:51.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:51.517: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:53.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:53.521: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:55.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:55.522: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:57.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:57.518: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 14:07:59.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 14:07:59.517: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:07:59.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2038" for this suite.
Jan 24 14:08:19.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:08:19.678: INFO: namespace container-lifecycle-hook-2038 deletion completed in 20.119141546s

• [SLOW TEST:58.425 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
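Mirroring the poststart case earlier, the prestop test above deletes the pod and then checks that the hook's side effect was observed. A minimal pod with a preStop exec hook looks roughly like this (a sketch; image and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs before the container receives SIGTERM on deletion;
          # it must finish within the termination grace period.
          command: ["sh", "-c", "echo prestop"]
```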
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:08:19.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 14:08:19.863: INFO: Waiting up to 5m0s for pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76" in namespace "emptydir-8044" to be "success or failure"
Jan 24 14:08:19.889: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76": Phase="Pending", Reason="", readiness=false. Elapsed: 25.460483ms
Jan 24 14:08:21.897: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033411187s
Jan 24 14:08:23.916: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053186432s
Jan 24 14:08:25.924: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0607219s
Jan 24 14:08:27.935: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072074305s
STEP: Saw pod success
Jan 24 14:08:27.936: INFO: Pod "pod-c785107c-9969-4384-80e6-c79dc90fdb76" satisfied condition "success or failure"
Jan 24 14:08:27.941: INFO: Trying to get logs from node iruya-node pod pod-c785107c-9969-4384-80e6-c79dc90fdb76 container test-container: 
STEP: delete the pod
Jan 24 14:08:28.009: INFO: Waiting for pod pod-c785107c-9969-4384-80e6-c79dc90fdb76 to disappear
Jan 24 14:08:28.017: INFO: Pod pod-c785107c-9969-4384-80e6-c79dc90fdb76 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:08:28.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8044" for this suite.
Jan 24 14:08:34.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:08:34.128: INFO: namespace emptydir-8044 deletion completed in 6.099179956s

• [SLOW TEST:14.449 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
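The "(non-root,0644,tmpfs)" triple in the test name above encodes the user, file mode, and volume medium being checked. A hedged sketch of such a pod (uid, paths, and commands are assumptions, not the suite's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  securityContext:
    runAsUser: 1001          # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /mnt/test/file && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed rather than node disk
```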
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:08:34.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 24 14:08:34.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8771'
Jan 24 14:08:36.367: INFO: stderr: ""
Jan 24 14:08:36.367: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 14:08:36.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Jan 24 14:08:36.603: INFO: stderr: ""
Jan 24 14:08:36.603: INFO: stdout: "update-demo-nautilus-rgb7p update-demo-nautilus-z76s5 "
Jan 24 14:08:36.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgb7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:36.730: INFO: stderr: ""
Jan 24 14:08:36.730: INFO: stdout: ""
Jan 24 14:08:36.730: INFO: update-demo-nautilus-rgb7p is created but not running
Jan 24 14:08:41.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Jan 24 14:08:42.694: INFO: stderr: ""
Jan 24 14:08:42.694: INFO: stdout: "update-demo-nautilus-rgb7p update-demo-nautilus-z76s5 "
Jan 24 14:08:42.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgb7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:43.289: INFO: stderr: ""
Jan 24 14:08:43.289: INFO: stdout: ""
Jan 24 14:08:43.289: INFO: update-demo-nautilus-rgb7p is created but not running
Jan 24 14:08:48.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Jan 24 14:08:48.433: INFO: stderr: ""
Jan 24 14:08:48.434: INFO: stdout: "update-demo-nautilus-rgb7p update-demo-nautilus-z76s5 "
Jan 24 14:08:48.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgb7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:48.532: INFO: stderr: ""
Jan 24 14:08:48.532: INFO: stdout: "true"
Jan 24 14:08:48.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgb7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:48.632: INFO: stderr: ""
Jan 24 14:08:48.632: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 14:08:48.632: INFO: validating pod update-demo-nautilus-rgb7p
Jan 24 14:08:48.642: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 14:08:48.642: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 24 14:08:48.642: INFO: update-demo-nautilus-rgb7p is verified up and running
Jan 24 14:08:48.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z76s5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:48.713: INFO: stderr: ""
Jan 24 14:08:48.713: INFO: stdout: "true"
Jan 24 14:08:48.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z76s5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Jan 24 14:08:48.792: INFO: stderr: ""
Jan 24 14:08:48.792: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 14:08:48.792: INFO: validating pod update-demo-nautilus-z76s5
Jan 24 14:08:48.805: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 14:08:48.805: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 24 14:08:48.805: INFO: update-demo-nautilus-z76s5 is verified up and running
STEP: using delete to clean up resources
Jan 24 14:08:48.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8771'
Jan 24 14:08:48.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 14:08:48.923: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 24 14:08:48.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8771'
Jan 24 14:08:49.011: INFO: stderr: "No resources found.\n"
Jan 24 14:08:49.011: INFO: stdout: ""
Jan 24 14:08:49.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8771 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 14:08:49.082: INFO: stderr: ""
Jan 24 14:08:49.083: INFO: stdout: "update-demo-nautilus-rgb7p\nupdate-demo-nautilus-z76s5\n"
Jan 24 14:08:49.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8771'
Jan 24 14:08:50.690: INFO: stderr: "No resources found.\n"
Jan 24 14:08:50.691: INFO: stdout: ""
Jan 24 14:08:50.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8771 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 14:08:51.009: INFO: stderr: ""
Jan 24 14:08:51.009: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:08:51.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8771" for this suite.
Jan 24 14:09:13.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:09:13.221: INFO: namespace kubectl-8771 deletion completed in 22.203497161s

• [SLOW TEST:39.093 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
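The repeated `kubectl get pods … -o template` calls above use a go-template that prints `true` only when a container named `update-demo` reports a `running` state; an empty stdout means "created but not running", so the test retries. The same check, mirrored in Python over a decoded pod object (a sketch; the pod dict shape follows the v1 Pod status fields seen in the template):

```python
def running_output(pod, container="update-demo"):
    """Mimic the e2e go-template: emit "true" for each matching
    container that has a running state, else an empty string."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container and "running" in cs.get("state", {}):
            out += "true"
    return out
```

An empty return corresponds to the `stdout: ""` lines in the log, and `"true"` to the successful checks at 14:08:48.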
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:09:13.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 14:09:13.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea" in namespace "downward-api-734" to be "success or failure"
Jan 24 14:09:13.330: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea": Phase="Pending", Reason="", readiness=false. Elapsed: 13.65374ms
Jan 24 14:09:15.337: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020288043s
Jan 24 14:09:17.345: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029059153s
Jan 24 14:09:19.351: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03523221s
Jan 24 14:09:21.360: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043648919s
STEP: Saw pod success
Jan 24 14:09:21.360: INFO: Pod "downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea" satisfied condition "success or failure"
Jan 24 14:09:21.366: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea container client-container: 
STEP: delete the pod
Jan 24 14:09:21.437: INFO: Waiting for pod downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea to disappear
Jan 24 14:09:21.453: INFO: Pod downwardapi-volume-8ac3a838-eb69-4a4f-bc78-271f20620dea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:09:21.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-734" for this suite.
Jan 24 14:09:27.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:09:27.631: INFO: namespace downward-api-734 deletion completed in 6.160781748s

• [SLOW TEST:14.410 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
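The "Waiting up to 5m0s for pod … to be \"success or failure\"" sequence above is a poll on the pod's phase until it reaches a terminal state. A minimal sketch, assuming a hypothetical `get_phase` callable in place of the real API read:

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300, interval=2):
    """Poll the pod phase until it is terminal or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase          # terminal phase satisfies the condition
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

Each "Phase=\"Pending\" … Elapsed: …" line in the log corresponds to one iteration of such a loop; "Saw pod success" is the `Succeeded` return.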
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:09:27.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-4e190123-d97f-470e-8c6b-f9cdb0d93f77
STEP: Creating a pod to test consume configMaps
Jan 24 14:09:27.736: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190" in namespace "configmap-4041" to be "success or failure"
Jan 24 14:09:27.773: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Pending", Reason="", readiness=false. Elapsed: 37.432ms
Jan 24 14:09:29.780: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044148084s
Jan 24 14:09:31.792: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055794132s
Jan 24 14:09:33.801: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064721529s
Jan 24 14:09:35.836: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100618279s
Jan 24 14:09:37.845: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108880118s
STEP: Saw pod success
Jan 24 14:09:37.845: INFO: Pod "pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190" satisfied condition "success or failure"
Jan 24 14:09:37.851: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190 container configmap-volume-test: 
STEP: delete the pod
Jan 24 14:09:37.983: INFO: Waiting for pod pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190 to disappear
Jan 24 14:09:37.989: INFO: Pod pod-configmaps-ba196586-360f-4f16-a84e-62f0138de190 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:09:37.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4041" for this suite.
Jan 24 14:09:44.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:09:44.111: INFO: namespace configmap-4041 deletion completed in 6.116190111s

• [SLOW TEST:16.480 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:09:44.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-adc4fe45-6767-4c77-96e4-defb5de31be2 in namespace container-probe-3970
Jan 24 14:09:52.264: INFO: Started pod busybox-adc4fe45-6767-4c77-96e4-defb5de31be2 in namespace container-probe-3970
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 14:09:52.268: INFO: Initial restart count of pod busybox-adc4fe45-6767-4c77-96e4-defb5de31be2 is 0
Jan 24 14:10:43.284: INFO: Restart count of pod container-probe-3970/busybox-adc4fe45-6767-4c77-96e4-defb5de31be2 is now 1 (51.015384803s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:10:43.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3970" for this suite.
Jan 24 14:10:49.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:10:49.567: INFO: namespace container-probe-3970 deletion completed in 6.195260334s

• [SLOW TEST:65.455 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
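The probe test above records the initial restart count (0) and then waits for the kubelet to restart the container once the exec probe starts failing; the log reports the elapsed time (51.015384803s) when the count increments. A sketch of that wait, with a hypothetical `get_restart_count` standing in for the status read:

```python
import time

def wait_for_restart(get_restart_count, initial, timeout=120, interval=1):
    """Return the elapsed seconds once the restart count exceeds `initial`."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_restart_count() > initial:
            return time.monotonic() - start   # restart observed
        time.sleep(interval)
    raise TimeoutError("container was never restarted")
```

The real test additionally caps the wait relative to the probe's period and failure threshold; those details are omitted here.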
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:10:49.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 14:10:49.698: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3" in namespace "projected-612" to be "success or failure"
Jan 24 14:10:49.702: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204375ms
Jan 24 14:10:51.718: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019303031s
Jan 24 14:10:53.725: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026195502s
Jan 24 14:10:55.732: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033869832s
Jan 24 14:10:57.741: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04219622s
Jan 24 14:10:59.750: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05160379s
STEP: Saw pod success
Jan 24 14:10:59.750: INFO: Pod "downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3" satisfied condition "success or failure"
Jan 24 14:10:59.754: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3 container client-container: 
STEP: delete the pod
Jan 24 14:10:59.816: INFO: Waiting for pod downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3 to disappear
Jan 24 14:10:59.829: INFO: Pod downwardapi-volume-58af0a74-ca04-4c1f-9988-828b217a56e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:10:59.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-612" for this suite.
Jan 24 14:11:05.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:11:06.022: INFO: namespace projected-612 deletion completed in 6.184097435s

• [SLOW TEST:16.454 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:11:06.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bafa5086-4252-48d6-b403-daa7a2ae0ec4
STEP: Creating a pod to test consume secrets
Jan 24 14:11:06.141: INFO: Waiting up to 5m0s for pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d" in namespace "secrets-5405" to be "success or failure"
Jan 24 14:11:06.158: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.96477ms
Jan 24 14:11:08.172: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030731684s
Jan 24 14:11:10.179: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037436002s
Jan 24 14:11:12.198: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057027527s
Jan 24 14:11:14.207: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066083455s
STEP: Saw pod success
Jan 24 14:11:14.207: INFO: Pod "pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d" satisfied condition "success or failure"
Jan 24 14:11:14.212: INFO: Trying to get logs from node iruya-node pod pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d container secret-volume-test: 
STEP: delete the pod
Jan 24 14:11:14.536: INFO: Waiting for pod pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d to disappear
Jan 24 14:11:14.546: INFO: Pod pod-secrets-4bc8ef1c-84c6-471c-80ce-7434ea9aca0d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:11:14.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5405" for this suite.
Jan 24 14:11:20.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:11:20.712: INFO: namespace secrets-5405 deletion completed in 6.159529692s

• [SLOW TEST:14.691 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:11:20.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:11:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7050" for this suite.
Jan 24 14:11:32.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:11:32.607: INFO: namespace watch-7050 deletion completed in 6.237565866s

• [SLOW TEST:11.894 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
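The Watchers test above starts one watch per observed resource version and checks that every watcher sees the subsequent events in the same order. One way to express that invariant: a watch started at a later resource version must observe a suffix of the full event order, with no reordering. A sketch of that check (the list-of-lists shape is an assumption, not the framework's actual types):

```python
def consistent_order(full, streams):
    """True if every stream is an order-preserving suffix of `full`,
    i.e. all watchers saw the shared events in the same order."""
    return all(full[len(full) - len(s):] == list(s) for s in streams)
```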
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:11:32.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:11:32.716: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.300793ms)
Jan 24 14:11:32.729: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.186678ms)
Jan 24 14:11:32.734: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.95472ms)
Jan 24 14:11:32.737: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.642656ms)
Jan 24 14:11:32.742: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.4397ms)
Jan 24 14:11:32.747: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.474693ms)
Jan 24 14:11:32.752: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.97434ms)
Jan 24 14:11:32.756: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.730354ms)
Jan 24 14:11:32.761: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.506406ms)
Jan 24 14:11:32.767: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.871264ms)
Jan 24 14:11:32.775: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.377036ms)
Jan 24 14:11:32.783: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.296498ms)
Jan 24 14:11:32.788: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.4699ms)
Jan 24 14:11:32.795: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.938794ms)
Jan 24 14:11:32.804: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.916757ms)
Jan 24 14:11:32.810: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.364321ms)
Jan 24 14:11:32.817: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.745979ms)
Jan 24 14:11:32.825: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.46875ms)
Jan 24 14:11:32.833: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.235963ms)
Jan 24 14:11:32.837: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.75907ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:11:32.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4794" for this suite.
Jan 24 14:11:38.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:11:39.056: INFO: namespace proxy-4794 deletion completed in 6.213105441s

• [SLOW TEST:6.449 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
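Each of the twenty requests above hits the node proxy subresource path shown in the log, `/api/v1/nodes/<node>/proxy/logs/`, and expects a 200 listing the node's log files. A small helper that builds that path (a sketch; only the path construction is shown, not the authenticated request):

```python
def node_proxy_logs_path(node, path="logs/"):
    """Build the node proxy subresource path queried by the test."""
    return f"/api/v1/nodes/{node}/proxy/{path}"
```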
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:11:39.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-83bd5c61-627d-4270-bbd2-bc4c6012642b in namespace container-probe-3889
Jan 24 14:11:47.177: INFO: Started pod liveness-83bd5c61-627d-4270-bbd2-bc4c6012642b in namespace container-probe-3889
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 14:11:47.181: INFO: Initial restart count of pod liveness-83bd5c61-627d-4270-bbd2-bc4c6012642b is 0
Jan 24 14:12:13.407: INFO: Restart count of pod container-probe-3889/liveness-83bd5c61-627d-4270-bbd2-bc4c6012642b is now 1 (26.22574139s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:12:13.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3889" for this suite.
Jan 24 14:12:19.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:12:19.674: INFO: namespace container-probe-3889 deletion completed in 6.226453865s

• [SLOW TEST:40.616 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
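The restart recorded at 14:12:13 (restartCount 0 → 1 after ~26s) is driven by an HTTP liveness probe against /healthz: once the probe fails failureThreshold times in a row, the kubelet kills and restarts the container. A minimal sketch of such a pod — the image, port, and probe timings below are illustrative assumptions, not values read from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # Assumed image; the actual e2e test image is not shown in this log.
    image: busybox
    livenessProbe:
      httpGet:
        path: /healthz    # the endpoint named in the spec title
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
```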
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:12:19.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 24 14:12:29.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-18c2e356-6619-4bcb-abe9-bb9853f9a3a3 -c busybox-main-container --namespace=emptydir-1932 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 24 14:12:30.418: INFO: stderr: "I0124 14:12:30.138096    2919 log.go:172] (0xc00095c370) (0xc0009bc820) Create stream\nI0124 14:12:30.138232    2919 log.go:172] (0xc00095c370) (0xc0009bc820) Stream added, broadcasting: 1\nI0124 14:12:30.144096    2919 log.go:172] (0xc00095c370) Reply frame received for 1\nI0124 14:12:30.144127    2919 log.go:172] (0xc00095c370) (0xc000578320) Create stream\nI0124 14:12:30.144154    2919 log.go:172] (0xc00095c370) (0xc000578320) Stream added, broadcasting: 3\nI0124 14:12:30.148642    2919 log.go:172] (0xc00095c370) Reply frame received for 3\nI0124 14:12:30.148663    2919 log.go:172] (0xc00095c370) (0xc0009bc8c0) Create stream\nI0124 14:12:30.148671    2919 log.go:172] (0xc00095c370) (0xc0009bc8c0) Stream added, broadcasting: 5\nI0124 14:12:30.150456    2919 log.go:172] (0xc00095c370) Reply frame received for 5\nI0124 14:12:30.255969    2919 log.go:172] (0xc00095c370) Data frame received for 3\nI0124 14:12:30.256070    2919 log.go:172] (0xc000578320) (3) Data frame handling\nI0124 14:12:30.256128    2919 log.go:172] (0xc000578320) (3) Data frame sent\nI0124 14:12:30.407220    2919 log.go:172] (0xc00095c370) (0xc000578320) Stream removed, broadcasting: 3\nI0124 14:12:30.407603    2919 log.go:172] (0xc00095c370) Data frame received for 1\nI0124 14:12:30.407656    2919 log.go:172] (0xc0009bc820) (1) Data frame handling\nI0124 14:12:30.407707    2919 log.go:172] (0xc0009bc820) (1) Data frame sent\nI0124 14:12:30.407832    2919 log.go:172] (0xc00095c370) (0xc0009bc8c0) Stream removed, broadcasting: 5\nI0124 14:12:30.407930    2919 log.go:172] (0xc00095c370) (0xc0009bc820) Stream removed, broadcasting: 1\nI0124 14:12:30.407965    2919 log.go:172] (0xc00095c370) Go away received\nI0124 14:12:30.408789    2919 log.go:172] (0xc00095c370) (0xc0009bc820) Stream removed, broadcasting: 1\nI0124 14:12:30.408814    2919 log.go:172] (0xc00095c370) (0xc000578320) Stream removed, broadcasting: 3\nI0124 14:12:30.408829    2919 log.go:172] (0xc00095c370) (0xc0009bc8c0) Stream removed, broadcasting: 5\n"
Jan 24 14:12:30.418: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:12:30.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1932" for this suite.
Jan 24 14:12:36.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:12:36.575: INFO: namespace emptydir-1932 deletion completed in 6.148195168s

• [SLOW TEST:16.901 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
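The stdout captured above ("Hello from the busy-box sub-container") is read through an emptyDir volume mounted into both containers of a single pod: one container writes the file, the other serves it, and `kubectl exec ... cat` verifies the contents. A sketch of the shape of such a pod, with container names taken from the log and the image and commands as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                # shared scratch space, lives as long as the pod
  containers:
  - name: busybox-main-container
    image: busybox              # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox              # assumed image
    # Writes the file the test later reads via `kubectl exec ... cat`.
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```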
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:12:36.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 24 14:12:36.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7005'
Jan 24 14:12:37.035: INFO: stderr: ""
Jan 24 14:12:37.035: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 24 14:12:38.045: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:38.045: INFO: Found 0 / 1
Jan 24 14:12:39.048: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:39.049: INFO: Found 0 / 1
Jan 24 14:12:40.047: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:40.047: INFO: Found 0 / 1
Jan 24 14:12:41.044: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:41.044: INFO: Found 0 / 1
Jan 24 14:12:42.046: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:42.046: INFO: Found 0 / 1
Jan 24 14:12:43.048: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:43.048: INFO: Found 0 / 1
Jan 24 14:12:44.045: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:44.045: INFO: Found 0 / 1
Jan 24 14:12:45.043: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:45.043: INFO: Found 0 / 1
Jan 24 14:12:46.044: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:46.044: INFO: Found 0 / 1
Jan 24 14:12:47.042: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:47.042: INFO: Found 1 / 1
Jan 24 14:12:47.042: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 24 14:12:47.046: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 14:12:47.046: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 24 14:12:47.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005'
Jan 24 14:12:47.181: INFO: stderr: ""
Jan 24 14:12:47.182: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jan 14:12:45.248 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 14:12:45.248 # Server started, Redis version 3.2.12\n1:M 24 Jan 14:12:45.249 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 14:12:45.249 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 24 14:12:47.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005 --tail=1'
Jan 24 14:12:47.304: INFO: stderr: ""
Jan 24 14:12:47.304: INFO: stdout: "1:M 24 Jan 14:12:45.249 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 24 14:12:47.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005 --limit-bytes=1'
Jan 24 14:12:47.426: INFO: stderr: ""
Jan 24 14:12:47.426: INFO: stdout: " "
STEP: exposing timestamps
Jan 24 14:12:47.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005 --tail=1 --timestamps'
Jan 24 14:12:47.521: INFO: stderr: ""
Jan 24 14:12:47.521: INFO: stdout: "2020-01-24T14:12:45.251097771Z 1:M 24 Jan 14:12:45.249 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 24 14:12:50.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005 --since=1s'
Jan 24 14:12:50.225: INFO: stderr: ""
Jan 24 14:12:50.225: INFO: stdout: ""
Jan 24 14:12:50.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxdrd redis-master --namespace=kubectl-7005 --since=24h'
Jan 24 14:12:50.364: INFO: stderr: ""
Jan 24 14:12:50.364: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jan 14:12:45.248 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 14:12:45.248 # Server started, Redis version 3.2.12\n1:M 24 Jan 14:12:45.249 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 14:12:45.249 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 24 14:12:50.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7005'
Jan 24 14:12:50.474: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 14:12:50.474: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 24 14:12:50.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7005'
Jan 24 14:12:50.590: INFO: stderr: "No resources found.\n"
Jan 24 14:12:50.590: INFO: stdout: ""
Jan 24 14:12:50.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7005 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 14:12:50.705: INFO: stderr: ""
Jan 24 14:12:50.705: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:12:50.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7005" for this suite.
Jan 24 14:13:12.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:13:12.863: INFO: namespace kubectl-7005 deletion completed in 22.148996536s

• [SLOW TEST:36.287 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
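The replication controller piped to `kubectl create -f -` at 14:12:36 would look roughly like this — the labels are consistent with the `app:redis` selector seen in the log, the image tag matches the Redis 3.2.12 banner, and everything else is an assumption:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2.12   # version taken from the server banner in the log
        ports:
        - containerPort: 6379
```

The filtering options exercised afterwards (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) are standard `kubectl logs` flags; note that `--since=1s` returning empty stdout is the expected result when no log lines were emitted in the last second.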
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:13:12.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 24 14:13:13.301: INFO: Waiting up to 5m0s for pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b" in namespace "emptydir-5772" to be "success or failure"
Jan 24 14:13:13.325: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.589819ms
Jan 24 14:13:15.349: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047501002s
Jan 24 14:13:17.361: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060031101s
Jan 24 14:13:19.382: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081251024s
Jan 24 14:13:21.402: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101079891s
Jan 24 14:13:23.417: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116151806s
STEP: Saw pod success
Jan 24 14:13:23.417: INFO: Pod "pod-5d270d28-5e72-4a41-805e-894e7420ec1b" satisfied condition "success or failure"
Jan 24 14:13:23.426: INFO: Trying to get logs from node iruya-node pod pod-5d270d28-5e72-4a41-805e-894e7420ec1b container test-container: 
STEP: delete the pod
Jan 24 14:13:23.662: INFO: Waiting for pod pod-5d270d28-5e72-4a41-805e-894e7420ec1b to disappear
Jan 24 14:13:23.669: INFO: Pod pod-5d270d28-5e72-4a41-805e-894e7420ec1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:13:23.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5772" for this suite.
Jan 24 14:13:29.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:13:29.899: INFO: namespace emptydir-5772 deletion completed in 6.220464745s

• [SLOW TEST:17.036 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
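The "(root,0666,default)" case creates a file with mode 0666 on an emptyDir backed by the node's default medium (disk) and verifies the resulting ownership and permissions before the pod exits, which is why the pod runs to "Succeeded". A hand-written approximation — the real suite uses a dedicated mount-test image, so the busybox command below is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-default
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}          # no medium set, so the node's default (disk) is used
  containers:
  - name: test-container
    image: busybox        # assumed; the e2e suite uses its own mounttest image
    # Create a file, force mode 0666, and print the permissions for verification.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```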
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:13:29.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 14:13:38.130: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:13:38.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5026" for this suite.
Jan 24 14:13:44.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:13:44.327: INFO: namespace container-runtime-5026 deletion completed in 6.139753315s

• [SLOW TEST:14.427 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
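With `terminationMessagePolicy: FallbackToLogsOnError`, the kubelet reads the termination message from the file at `terminationMessagePath` whenever that file is non-empty, and falls back to the tail of the container log only when the container fails with an empty file. Since this pod succeeds and writes the file, the message comes from the file, matching the `Expected: &{OK}` check above. A sketch of a pod that would produce it (image is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox   # assumed image
    # Write the termination message to the default path, then exit 0.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```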
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:13:44.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:13:44.497: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f08c3bc9-edee-440e-807e-4d90ef5677fa", Controller:(*bool)(0xc0023db43a), BlockOwnerDeletion:(*bool)(0xc0023db43b)}}
Jan 24 14:13:44.514: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"128a8ca5-7b36-4e6d-8636-0aac6bf1b7de", Controller:(*bool)(0xc0023db5da), BlockOwnerDeletion:(*bool)(0xc0023db5db)}}
Jan 24 14:13:44.526: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1401a7a9-eadc-4873-b547-5cf6e12a65f0", Controller:(*bool)(0xc00235b31a), BlockOwnerDeletion:(*bool)(0xc00235b31b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:13:49.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9834" for this suite.
Jan 24 14:13:55.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:13:55.792: INFO: namespace gc-9834 deletion completed in 6.250368507s

• [SLOW TEST:11.464 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
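The three INFO lines above show pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2 — a deliberate ownerReference cycle. The test asserts that the garbage collector breaks such cycles and deletes all members rather than deadlocking. A sketch of one link of the cycle; note the test sets these references via the API after creation, since an ownerReference UID must be the live UID of the owner, so the value below is only a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod1
    uid: 00000000-0000-0000-0000-000000000000   # placeholder; must match pod1's actual UID
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: busybox   # assumed image
    command: ["sleep", "3600"]
```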
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:13:55.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-6tc4g in namespace proxy-8300
I0124 14:13:56.120579       9 runners.go:180] Created replication controller with name: proxy-service-6tc4g, namespace: proxy-8300, replica count: 1
I0124 14:13:57.171241       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:13:58.171564       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:13:59.171950       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:14:00.172286       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:14:01.172681       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:14:02.173093       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:14:03.173405       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 14:14:04.173795       9 runners.go:180] proxy-service-6tc4g Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 24 14:14:04.181: INFO: setup took 8.19684909s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 24 14:14:04.229: INFO: (0) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 48.05329ms)
Jan 24 14:14:04.229: INFO: (0) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 48.022542ms)
Jan 24 14:14:04.230: INFO: (0) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 48.474443ms)
Jan 24 14:14:04.288: INFO: (0) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 106.272895ms)
Jan 24 14:14:04.288: INFO: (0) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 106.358474ms)
Jan 24 14:14:04.290: INFO: (0) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 108.628907ms)
Jan 24 14:14:04.311: INFO: (0) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 129.923373ms)
Jan 24 14:14:04.318: INFO: (0) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 136.738327ms)
Jan 24 14:14:04.319: INFO: (0) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 137.618887ms)
Jan 24 14:14:04.319: INFO: (0) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 138.130136ms)
Jan 24 14:14:04.335: INFO: (0) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 153.339581ms)
Jan 24 14:14:04.335: INFO: (0) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 153.525637ms)
Jan 24 14:14:04.335: INFO: (0) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 153.759299ms)
Jan 24 14:14:04.335: INFO: (0) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 153.705629ms)
Jan 24 14:14:04.335: INFO: (0) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 154.022241ms)
Jan 24 14:14:04.341: INFO: (0) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: ... (200; 52.778067ms)
Jan 24 14:14:04.394: INFO: (1) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 53.030992ms)
Jan 24 14:14:04.394: INFO: (1) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 53.244158ms)
Jan 24 14:14:04.395: INFO: (1) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 53.953801ms)
Jan 24 14:14:04.395: INFO: (1) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 55.303856ms)
Jan 24 14:14:04.398: INFO: (1) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 56.73251ms)
Jan 24 14:14:04.398: INFO: (1) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 56.951192ms)
Jan 24 14:14:04.413: INFO: (2) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 14.929369ms)
Jan 24 14:14:04.413: INFO: (2) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 15.233025ms)
Jan 24 14:14:04.414: INFO: (2) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 15.728599ms)
Jan 24 14:14:04.414: INFO: (2) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 16.068303ms)
Jan 24 14:14:04.416: INFO: (2) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 17.413127ms)
Jan 24 14:14:04.416: INFO: (2) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 17.834169ms)
Jan 24 14:14:04.416: INFO: (2) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 14.193175ms)
Jan 24 14:14:04.439: INFO: (3) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 14.310191ms)
Jan 24 14:14:04.439: INFO: (3) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 14.740117ms)
Jan 24 14:14:04.441: INFO: (3) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 16.334006ms)
Jan 24 14:14:04.441: INFO: (3) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 17.100687ms)
Jan 24 14:14:04.442: INFO: (3) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 17.389136ms)
Jan 24 14:14:04.442: INFO: (3) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 17.772804ms)
Jan 24 14:14:04.445: INFO: (3) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 20.375438ms)
Jan 24 14:14:04.455: INFO: (4) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 9.815063ms)
Jan 24 14:14:04.455: INFO: (4) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 9.815984ms)
Jan 24 14:14:04.455: INFO: (4) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 9.941207ms)
Jan 24 14:14:04.461: INFO: (4) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test<... (200; 17.805925ms)
Jan 24 14:14:04.462: INFO: (4) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 17.517713ms)
Jan 24 14:14:04.463: INFO: (4) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 17.935081ms)
Jan 24 14:14:04.463: INFO: (4) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 17.98271ms)
Jan 24 14:14:04.464: INFO: (4) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 19.322948ms)
Jan 24 14:14:04.465: INFO: (4) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 19.8571ms)
Jan 24 14:14:04.465: INFO: (4) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 20.105041ms)
Jan 24 14:14:04.465: INFO: (4) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 20.527368ms)
Jan 24 14:14:04.465: INFO: (4) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 20.608633ms)
Jan 24 14:14:04.473: INFO: (5) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: ... (200; 13.258245ms)
Jan 24 14:14:04.480: INFO: (5) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 14.018042ms)
Jan 24 14:14:04.480: INFO: (5) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 13.643802ms)
Jan 24 14:14:04.480: INFO: (5) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 13.50068ms)
Jan 24 14:14:04.480: INFO: (5) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 13.801539ms)
Jan 24 14:14:04.483: INFO: (5) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 16.991034ms)
Jan 24 14:14:04.488: INFO: (5) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 21.443583ms)
Jan 24 14:14:04.488: INFO: (5) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 22.630132ms)
Jan 24 14:14:04.488: INFO: (5) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 22.343219ms)
Jan 24 14:14:04.488: INFO: (5) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 21.272482ms)
Jan 24 14:14:04.488: INFO: (5) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 21.854265ms)
Jan 24 14:14:04.489: INFO: (5) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 21.728771ms)
Jan 24 14:14:04.489: INFO: (5) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 22.23236ms)
Jan 24 14:14:04.501: INFO: (6) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 12.459517ms)
Jan 24 14:14:04.502: INFO: (6) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 12.824971ms)
Jan 24 14:14:04.503: INFO: (6) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 14.059204ms)
Jan 24 14:14:04.503: INFO: (6) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 14.141522ms)
Jan 24 14:14:04.503: INFO: (6) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 14.186885ms)
Jan 24 14:14:04.503: INFO: (6) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 14.284359ms)
Jan 24 14:14:04.503: INFO: (6) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 14.44009ms)
Jan 24 14:14:04.504: INFO: (6) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 15.148244ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 15.957489ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 15.977364ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 16.077641ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 16.222804ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 16.287151ms)
Jan 24 14:14:04.505: INFO: (6) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 16.561336ms)
Jan 24 14:14:04.506: INFO: (6) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 16.678203ms)
Jan 24 14:14:04.508: INFO: (6) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: ... (200; 8.813845ms)
Jan 24 14:14:04.518: INFO: (7) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 9.22576ms)
Jan 24 14:14:04.518: INFO: (7) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 9.146139ms)
Jan 24 14:14:04.518: INFO: (7) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 10.865098ms)
Jan 24 14:14:04.527: INFO: (7) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 18.649078ms)
Jan 24 14:14:04.527: INFO: (7) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 18.750038ms)
Jan 24 14:14:04.527: INFO: (7) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 18.82925ms)
Jan 24 14:14:04.527: INFO: (7) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 18.851055ms)
Jan 24 14:14:04.527: INFO: (7) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 18.805799ms)
Jan 24 14:14:04.529: INFO: (7) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 20.432445ms)
Jan 24 14:14:04.529: INFO: (7) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 20.487987ms)
Jan 24 14:14:04.537: INFO: (7) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 28.454726ms)
Jan 24 14:14:04.538: INFO: (7) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 29.202055ms)
Jan 24 14:14:04.553: INFO: (8) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 15.340392ms)
Jan 24 14:14:04.553: INFO: (8) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 15.438234ms)
Jan 24 14:14:04.553: INFO: (8) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 15.373465ms)
Jan 24 14:14:04.553: INFO: (8) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 15.546ms)
Jan 24 14:14:04.554: INFO: (8) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 15.882934ms)
Jan 24 14:14:04.554: INFO: (8) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 16.092883ms)
Jan 24 14:14:04.554: INFO: (8) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 16.149486ms)
Jan 24 14:14:04.554: INFO: (8) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 16.547276ms)
Jan 24 14:14:04.555: INFO: (8) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 16.741419ms)
Jan 24 14:14:04.555: INFO: (8) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 16.657296ms)
Jan 24 14:14:04.555: INFO: (8) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 16.902321ms)
Jan 24 14:14:04.555: INFO: (8) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 16.897347ms)
Jan 24 14:14:04.556: INFO: (8) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 17.602908ms)
Jan 24 14:14:04.556: INFO: (8) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 17.710308ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 14.896566ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 15.092431ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 15.188879ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 15.174618ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 15.384308ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 15.245698ms)
Jan 24 14:14:04.571: INFO: (9) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 15.461501ms)
Jan 24 14:14:04.572: INFO: (9) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 15.684204ms)
Jan 24 14:14:04.572: INFO: (9) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 16.263586ms)
Jan 24 14:14:04.572: INFO: (9) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 16.537869ms)
Jan 24 14:14:04.572: INFO: (9) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 8.425233ms)
Jan 24 14:14:04.587: INFO: (10) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 8.62414ms)
Jan 24 14:14:04.588: INFO: (10) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 9.026862ms)
Jan 24 14:14:04.588: INFO: (10) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 9.028669ms)
Jan 24 14:14:04.588: INFO: (10) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 9.176545ms)
Jan 24 14:14:04.588: INFO: (10) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 9.196486ms)
Jan 24 14:14:04.592: INFO: (10) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 13.210369ms)
Jan 24 14:14:04.592: INFO: (10) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 13.307521ms)
Jan 24 14:14:04.592: INFO: (10) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test<... (200; 12.245718ms)
Jan 24 14:14:04.609: INFO: (11) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 12.62435ms)
Jan 24 14:14:04.609: INFO: (11) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 12.883514ms)
Jan 24 14:14:04.609: INFO: (11) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 13.084902ms)
Jan 24 14:14:04.609: INFO: (11) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 13.19184ms)
Jan 24 14:14:04.612: INFO: (11) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 8.665906ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 9.347664ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 9.255625ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 9.546423ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 9.677472ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 9.658032ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 9.629411ms)
Jan 24 14:14:04.623: INFO: (12) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test<... (200; 8.569843ms)
Jan 24 14:14:04.636: INFO: (13) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 8.891966ms)
Jan 24 14:14:04.636: INFO: (13) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 9.150512ms)
Jan 24 14:14:04.637: INFO: (13) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 10.300024ms)
Jan 24 14:14:04.637: INFO: (13) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 10.431556ms)
Jan 24 14:14:04.638: INFO: (13) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 11.631692ms)
Jan 24 14:14:04.638: INFO: (13) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 11.659106ms)
Jan 24 14:14:04.638: INFO: (13) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 11.76681ms)
Jan 24 14:14:04.639: INFO: (13) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 6.578802ms)
Jan 24 14:14:04.648: INFO: (14) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 6.77902ms)
Jan 24 14:14:04.648: INFO: (14) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 6.741152ms)
Jan 24 14:14:04.648: INFO: (14) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 6.821654ms)
Jan 24 14:14:04.648: INFO: (14) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 6.811607ms)
Jan 24 14:14:04.648: INFO: (14) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test<... (200; 7.323644ms)
Jan 24 14:14:04.650: INFO: (14) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 9.309241ms)
Jan 24 14:14:04.651: INFO: (14) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 10.31708ms)
Jan 24 14:14:04.651: INFO: (14) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 10.232503ms)
Jan 24 14:14:04.651: INFO: (14) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 10.25751ms)
Jan 24 14:14:04.651: INFO: (14) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 10.297493ms)
Jan 24 14:14:04.652: INFO: (14) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 10.714631ms)
Jan 24 14:14:04.663: INFO: (15) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 11.572084ms)
Jan 24 14:14:04.663: INFO: (15) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 11.651046ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 11.652828ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 11.865506ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 11.920824ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 11.99668ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 11.99398ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 12.091938ms)
Jan 24 14:14:04.664: INFO: (15) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 12.143601ms)
Jan 24 14:14:04.665: INFO: (15) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 13.01924ms)
Jan 24 14:14:04.665: INFO: (15) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 13.022924ms)
Jan 24 14:14:04.665: INFO: (15) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 13.089878ms)
Jan 24 14:14:04.665: INFO: (15) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test (200; 14.814547ms)
Jan 24 14:14:04.681: INFO: (16) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 14.488375ms)
Jan 24 14:14:04.681: INFO: (16) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 14.278409ms)
Jan 24 14:14:04.681: INFO: (16) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 14.384883ms)
Jan 24 14:14:04.687: INFO: (17) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 6.42418ms)
Jan 24 14:14:04.687: INFO: (17) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: ... (200; 9.48848ms)
Jan 24 14:14:04.692: INFO: (17) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 11.365614ms)
Jan 24 14:14:04.692: INFO: (17) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 11.404523ms)
Jan 24 14:14:04.692: INFO: (17) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 11.528028ms)
Jan 24 14:14:04.693: INFO: (17) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 11.709074ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 12.691962ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 12.720898ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 12.836452ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 12.865714ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 12.989643ms)
Jan 24 14:14:04.694: INFO: (17) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 13.049195ms)
Jan 24 14:14:04.703: INFO: (18) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 8.580215ms)
Jan 24 14:14:04.703: INFO: (18) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/: test<... (200; 9.004009ms)
Jan 24 14:14:04.703: INFO: (18) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 9.009233ms)
Jan 24 14:14:04.703: INFO: (18) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 9.118284ms)
Jan 24 14:14:04.703: INFO: (18) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 9.108988ms)
Jan 24 14:14:04.704: INFO: (18) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 10.114028ms)
Jan 24 14:14:04.705: INFO: (18) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 11.470586ms)
Jan 24 14:14:04.706: INFO: (18) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 11.733551ms)
Jan 24 14:14:04.708: INFO: (18) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 13.511217ms)
Jan 24 14:14:04.708: INFO: (18) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 13.769183ms)
Jan 24 14:14:04.709: INFO: (18) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 15.438488ms)
Jan 24 14:14:04.710: INFO: (18) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 15.733755ms)
Jan 24 14:14:04.717: INFO: (19) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:160/proxy/: foo (200; 6.926155ms)
Jan 24 14:14:04.721: INFO: (19) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:1080/proxy/: ... (200; 11.01753ms)
Jan 24 14:14:04.721: INFO: (19) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname1/proxy/: foo (200; 11.275769ms)
Jan 24 14:14:04.721: INFO: (19) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:1080/proxy/: test<... (200; 11.576994ms)
Jan 24 14:14:04.722: INFO: (19) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 11.664441ms)
Jan 24 14:14:04.722: INFO: (19) /api/v1/namespaces/proxy-8300/pods/proxy-service-6tc4g-lbd8c/proxy/: test (200; 11.664074ms)
Jan 24 14:14:04.722: INFO: (19) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:462/proxy/: tls qux (200; 11.754736ms)
Jan 24 14:14:04.722: INFO: (19) /api/v1/namespaces/proxy-8300/pods/http:proxy-service-6tc4g-lbd8c:162/proxy/: bar (200; 11.668241ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:460/proxy/: tls baz (200; 16.965478ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname1/proxy/: tls baz (200; 17.122902ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/services/proxy-service-6tc4g:portname2/proxy/: bar (200; 17.273281ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/services/https:proxy-service-6tc4g:tlsportname2/proxy/: tls qux (200; 17.291374ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname1/proxy/: foo (200; 17.321418ms)
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/services/http:proxy-service-6tc4g:portname2/proxy/: bar (200; 17.374699ms)
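The URLs exercised above all follow the API server's proxy subresource pattern: `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port-or-portname>]/proxy/`, with the e2e framework appending the HTTP status and round-trip latency to each line. A small sketch of that pattern (helper names are illustrative, not part of the k8s test framework):

```python
# Build apiserver proxy-subresource paths like the ones in the log above,
# and parse the "(iteration) path: body (status; latency)" records the
# e2e framework emits. Helper names here are hypothetical.
import re

def pod_proxy_path(namespace, pod, port=None, scheme=None):
    # /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/
    name = f"{scheme}:{pod}" if scheme else pod
    if port is not None:
        name = f"{name}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{name}/proxy/"

def service_proxy_path(namespace, svc, port_name, scheme=None):
    # /api/v1/namespaces/<ns>/services/[<scheme>:]<svc>:<portname>/proxy/
    name = f"{scheme}:{svc}" if scheme else svc
    return f"/api/v1/namespaces/{namespace}/services/{name}:{port_name}/proxy/"

LAT_RE = re.compile(r"\((\d+)\) (\S+): .* \((\d+); ([\d.]+)ms\)")

def parse_latency(line):
    # Returns (iteration, path, status, latency_ms), or None if the line
    # does not match (e.g. one of the truncated records above).
    m = LAT_RE.search(line)
    if not m:
        return None
    it, path, status, ms = m.groups()
    return int(it), path, int(status), float(ms)
```

The `http:`/`https:` prefix selects the scheme the apiserver uses to reach the backend, which is why the same pod appears under both forms in the log.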
Jan 24 14:14:04.727: INFO: (19) /api/v1/namespaces/proxy-8300/pods/https:proxy-service-6tc4g-lbd8c:443/proxy/:
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7a23ac7f-58c5-4a41-a331-83857986208b
STEP: Creating a pod to test consume configMaps
Jan 24 14:14:22.900: INFO: Waiting up to 5m0s for pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1" in namespace "configmap-7955" to be "success or failure"
Jan 24 14:14:22.909: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.088894ms
Jan 24 14:14:24.917: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01699124s
Jan 24 14:14:26.925: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02475963s
Jan 24 14:14:28.933: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032468375s
Jan 24 14:14:30.944: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043674742s
STEP: Saw pod success
Jan 24 14:14:30.944: INFO: Pod "pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1" satisfied condition "success or failure"
Jan 24 14:14:30.946: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1 container configmap-volume-test: 
STEP: delete the pod
Jan 24 14:14:31.044: INFO: Waiting for pod pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1 to disappear
Jan 24 14:14:31.048: INFO: Pod pod-configmaps-4eb21521-d239-4d75-89f1-33ccc935eee1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:14:31.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7955" for this suite.
Jan 24 14:14:37.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:14:37.183: INFO: namespace configmap-7955 deletion completed in 6.13101606s

• [SLOW TEST:14.434 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
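The "success or failure" wait visible in the test above is a poll-until-terminal-phase loop with a 5-minute budget, logging the elapsed time at each poll. A minimal sketch, with `get_pod_phase` standing in for a real API client call (both names are hypothetical, not the framework's):

```python
# Sketch of the e2e framework's wait: poll the pod phase until it reaches
# a terminal state ("Succeeded"/"Failed") or the timeout expires.
# clock/sleep are injectable so the loop can be tested without waiting.
import time

def wait_for_pod_terminal(get_pod_phase, timeout=300.0, interval=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while True:
        phase = get_pod_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

In the log above the pod stays `Pending` for four polls (~2 s apart) before reporting `Succeeded` after roughly 8 s, which is the same shape this loop produces.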
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:14:37.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:14:37.269: INFO: Creating deployment "nginx-deployment"
Jan 24 14:14:37.273: INFO: Waiting for observed generation 1
Jan 24 14:14:40.466: INFO: Waiting for all required pods to come up
Jan 24 14:14:40.479: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 24 14:15:08.975: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 24 14:15:08.983: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 24 14:15:08.999: INFO: Updating deployment nginx-deployment
Jan 24 14:15:08.999: INFO: Waiting for observed generation 2
Jan 24 14:15:11.468: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 24 14:15:11.475: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 24 14:15:12.135: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 24 14:15:12.148: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 24 14:15:12.148: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 24 14:15:12.151: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 24 14:15:12.157: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 24 14:15:12.157: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 24 14:15:12.167: INFO: Updating deployment nginx-deployment
Jan 24 14:15:12.167: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 24 14:15:13.823: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 24 14:15:14.171: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
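The 20/13 split above is the proportional-scaling arithmetic: with the rollout stuck, the deployment is capped at 30 desired + maxSurge 3 = 33 replicas (the `deployment.kubernetes.io/max-replicas: 33` annotation in the ReplicaSet dumps), and each ReplicaSet keeps its current share of that cap. A simplified model of the calculation (not the actual deployment controller code, which also corrects rounding leftovers):

```python
# Proportional scaling sketch: each ReplicaSet's new size is its current
# fraction of the total, applied to the new cap (desired + maxSurge) and
# rounded to the nearest whole replica. Simplified model, not k8s source.
def proportional_sizes(replicas, new_max):
    current = sum(replicas)
    return [round(r * new_max / current) for r in replicas]
```

For the 8:5 split in this test the rounded sizes land exactly on the cap (8/13 of 33 ≈ 20.3 → 20, 5/13 of 33 ≈ 12.7 → 13); in general the controller has to distribute rounding leftovers so the sizes sum to the cap.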
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 24 14:15:22.348: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2287,SelfLink:/apis/apps/v1/namespaces/deployment-2287/deployments/nginx-deployment,UID:01e90533-b50e-4f18-8e50-3d7ee03584d6,ResourceVersion:21692209,Generation:3,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-24 14:15:12 +0000 UTC 2020-01-24 14:15:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-24 14:15:17 +0000 UTC 2020-01-24 14:14:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 24 14:15:22.934: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2287,SelfLink:/apis/apps/v1/namespaces/deployment-2287/replicasets/nginx-deployment-55fb7cb77f,UID:20c15bc9-1122-4cfd-9498-02e567dff85f,ResourceVersion:21692203,Generation:3,CreationTimestamp:2020-01-24 14:15:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 01e90533-b50e-4f18-8e50-3d7ee03584d6 0xc0035e22a7 0xc0035e22a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:15:22.934: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 24 14:15:22.934: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2287,SelfLink:/apis/apps/v1/namespaces/deployment-2287/replicasets/nginx-deployment-7b8c6f4498,UID:17fb062b-e513-4159-b912-62050e382a38,ResourceVersion:21692216,Generation:3,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 01e90533-b50e-4f18-8e50-3d7ee03584d6 0xc0035e2377 0xc0035e2378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
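The two ReplicaSet dumps above show a rolling update caught mid-flight: both carry the annotations `deployment.kubernetes.io/desired-replicas: 30` and `deployment.kubernetes.io/max-replicas: 33`, the new ReplicaSet (image `nginx:404`) holds 13 replicas, and the old one (`nginx:1.14-alpine`) holds 20. The arithmetic can be sketched as below; the helper names are hypothetical, but the relationship `max-replicas = desired-replicas + maxSurge` is how the deployment controller stamps those annotations, and 13 + 20 = 33 shows the surge budget is fully used.

```python
# Hypothetical helpers mirroring the rollout arithmetic visible in the
# ReplicaSet dumps above (desired-replicas: 30, max-replicas: 33,
# new RS at 13 replicas, old RS at 20).

def implied_max_surge(desired: int, max_replicas: int) -> int:
    """Recover maxSurge from the deployment.kubernetes.io annotations.

    The controller stamps max-replicas = desired-replicas + maxSurge,
    so the surge budget falls out by subtraction.
    """
    return max_replicas - desired


def surge_saturated(new_rs: int, old_rs: int, desired: int, max_surge: int) -> bool:
    # The controller will not run more total pods than desired + maxSurge,
    # so the rollout stalls scaling up once this bound is hit.
    return new_rs + old_rs >= desired + max_surge


max_surge = implied_max_surge(30, 33)
print(max_surge)                                  # 3
print(surge_saturated(13, 20, 30, max_surge))     # True
```

Because the new ReplicaSet's pods never become ready (the `nginx:404` tag cannot be pulled), the rollout sits at this ceiling: it cannot surge further, and it cannot scale the old ReplicaSet down past the availability floor.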
Jan 24 14:15:26.037: INFO: Pod "nginx-deployment-55fb7cb77f-4pgwr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4pgwr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-4pgwr,UID:4e0abb57-a6b3-423a-9fbb-f08233b48a1a,ResourceVersion:21692121,Generation:0,CreationTimestamp:2020-01-24 14:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0007 0xc0033c0008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c00a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.038: INFO: Pod "nginx-deployment-55fb7cb77f-7n4m6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7n4m6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-7n4m6,UID:1d1372cf-746e-4a6d-98b0-aa0910fe0e0d,ResourceVersion:21692110,Generation:0,CreationTimestamp:2020-01-24 14:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0177 0xc0033c0178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c01f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.038: INFO: Pod "nginx-deployment-55fb7cb77f-89pnv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-89pnv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-89pnv,UID:bd4fc556-c1d6-4456-93fc-8fd1d98a0046,ResourceVersion:21692140,Generation:0,CreationTimestamp:2020-01-24 14:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c02e7 0xc0033c02e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.039: INFO: Pod "nginx-deployment-55fb7cb77f-8f5dl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8f5dl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-8f5dl,UID:d45a5ae9-c681-4f07-a24c-67c5330fdabe,ResourceVersion:21692205,Generation:0,CreationTimestamp:2020-01-24 14:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0457 0xc0033c0458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c04c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c04e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.039: INFO: Pod "nginx-deployment-55fb7cb77f-dllrs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dllrs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-dllrs,UID:36757861-0f0f-44dd-a91e-1b1e1c87f437,ResourceVersion:21692118,Generation:0,CreationTimestamp:2020-01-24 14:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c05b7 0xc0033c05b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c0620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.040: INFO: Pod "nginx-deployment-55fb7cb77f-llqkx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-llqkx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-llqkx,UID:d98173a4-406c-422f-870c-4137dcaf0406,ResourceVersion:21692196,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0717 0xc0033c0718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c0780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c07a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.040: INFO: Pod "nginx-deployment-55fb7cb77f-s7mm7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s7mm7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-s7mm7,UID:3fba6407-a809-423b-9f1e-2c13852e22b3,ResourceVersion:21692141,Generation:0,CreationTimestamp:2020-01-24 14:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0827 0xc0033c0828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c0890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c08b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.041: INFO: Pod "nginx-deployment-55fb7cb77f-sk4tx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sk4tx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-sk4tx,UID:7f39fc57-5aae-4cfb-ba6d-782ab09fb3d2,ResourceVersion:21692227,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0987 0xc0033c0988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0a00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:15:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.041: INFO: Pod "nginx-deployment-55fb7cb77f-sxhkj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxhkj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-sxhkj,UID:bb426d61-1dd5-4850-939a-8a84b469d87f,ResourceVersion:21692201,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0af7 0xc0033c0af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0b70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.041: INFO: Pod "nginx-deployment-55fb7cb77f-t5rmm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t5rmm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-t5rmm,UID:076fa77b-287c-4194-b2c2-c168aa9bf59b,ResourceVersion:21692199,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0c17 0xc0033c0c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.042: INFO: Pod "nginx-deployment-55fb7cb77f-t6mfj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t6mfj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-t6mfj,UID:9d009a1f-a60b-4a3c-a91f-d5cc7f8b56f7,ResourceVersion:21692195,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0d37 0xc0033c0d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c0da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.042: INFO: Pod "nginx-deployment-55fb7cb77f-txfx6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-txfx6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-txfx6,UID:0dddbf66-92ce-49e7-a353-8bddf931d831,ResourceVersion:21692194,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0e47 0xc0033c0e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c0eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.043: INFO: Pod "nginx-deployment-55fb7cb77f-x7lt8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x7lt8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-55fb7cb77f-x7lt8,UID:6ae6c844-717d-4a01-bee1-0f5aec9d8f03,ResourceVersion:21692189,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 20c15bc9-1122-4cfd-9498-02e567dff85f 0xc0033c0f57 0xc0033c0f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0033c0fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c0ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.043: INFO: Pod "nginx-deployment-7b8c6f4498-5lw8k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5lw8k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-5lw8k,UID:4bf98fd9-b64f-40ea-a723-8fcde76b4b18,ResourceVersion:21692058,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1077 0xc0033c1078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c10f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a5c4502af6dcdab7e9bbc59aa1aef42cab2d4ba1853b4f11d517c92c0473ddd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.043: INFO: Pod "nginx-deployment-7b8c6f4498-6lb58" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6lb58,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-6lb58,UID:27abdac7-ce08-420e-9153-ca1553d2aa60,ResourceVersion:21692206,Generation:0,CreationTimestamp:2020-01-24 14:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c11e7 0xc0033c11e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:15:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.044: INFO: Pod "nginx-deployment-7b8c6f4498-862wb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-862wb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-862wb,UID:0343cbdb-c55a-442e-81b8-52116ecf66e7,ResourceVersion:21692183,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1347 0xc0033c1348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c13c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c13e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.044: INFO: Pod "nginx-deployment-7b8c6f4498-8xg2j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8xg2j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-8xg2j,UID:9480945c-1640-4f4b-9fcb-d15c09608046,ResourceVersion:21692197,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1467 0xc0033c1468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c14d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c14f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.044: INFO: Pod "nginx-deployment-7b8c6f4498-9wmd5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9wmd5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-9wmd5,UID:ce699a5f-0613-476e-a1e0-606606377f13,ResourceVersion:21692073,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1577 0xc0033c1578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c15e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://76c82773db4e77e554db4b9a07d6ae9ab5f1f1e4204e232f1316611f046da552}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.045: INFO: Pod "nginx-deployment-7b8c6f4498-bhsjw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhsjw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-bhsjw,UID:057ae314-0e03-4c3b-896e-717d294cb0f5,ResourceVersion:21692185,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c16d7 0xc0033c16d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.045: INFO: Pod "nginx-deployment-7b8c6f4498-dgb9p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dgb9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-dgb9p,UID:872427c4-6894-4d03-b624-5cf730526765,ResourceVersion:21692191,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c17f7 0xc0033c17f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.045: INFO: Pod "nginx-deployment-7b8c6f4498-h7js6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h7js6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-h7js6,UID:81e6f753-baaa-4b85-a436-bbdd6e2ce979,ResourceVersion:21692187,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1907 0xc0033c1908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c19a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.045: INFO: Pod "nginx-deployment-7b8c6f4498-htfv8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-htfv8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-htfv8,UID:899cb099-9deb-4087-a1d2-e1fb92217291,ResourceVersion:21692181,Generation:0,CreationTimestamp:2020-01-24 14:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1a27 0xc0033c1a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.046: INFO: Pod "nginx-deployment-7b8c6f4498-jhjbx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jhjbx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-jhjbx,UID:045bad54-62a4-41c4-a4d7-c26f0a37a50c,ResourceVersion:21692218,Generation:0,CreationTimestamp:2020-01-24 14:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1b77 0xc0033c1b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.046: INFO: Pod "nginx-deployment-7b8c6f4498-mjwvj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mjwvj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-mjwvj,UID:b64df4d1-d6ee-4580-8c2b-c577e84c4d9e,ResourceVersion:21692070,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1cc7 0xc0033c1cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2caf0f1503a4b4e569d295729f52f08ab0e6d0a106e488cba21b7ac312c481f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.046: INFO: Pod "nginx-deployment-7b8c6f4498-mwpnt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mwpnt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-mwpnt,UID:f146324b-e036-4b38-bd31-c28c3297ddbe,ResourceVersion:21692226,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1e27 0xc0033c1e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033c1eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-24 14:15:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.047: INFO: Pod "nginx-deployment-7b8c6f4498-nqzzn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nqzzn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-nqzzn,UID:0a00125f-807c-48c8-9f27-79196ec88a57,ResourceVersion:21692077,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc0033c1f77 0xc0033c1f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033c1fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee8000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4ef66f051f51737b3440671eb1d7fe73b2cc046fcc2cd8663cda00e2eee2b077}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.047: INFO: Pod "nginx-deployment-7b8c6f4498-qdbrx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qdbrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-qdbrx,UID:9bd9b1a0-67e4-40a1-bdbe-438493da5f6b,ResourceVersion:21692048,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee80d7 0xc002ee80d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee8150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee8170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://39e146905e30b2f7d37996704dc8af66e03d8e7a8b55b9ed699d1899b863b080}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.047: INFO: Pod "nginx-deployment-7b8c6f4498-qhwm8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qhwm8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-qhwm8,UID:b245c33f-ed4d-4045-b7cc-09725add2472,ResourceVersion:21692188,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee8247 0xc002ee8248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee82b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee82d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.047: INFO: Pod "nginx-deployment-7b8c6f4498-r645k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r645k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-r645k,UID:9e4f4368-0877-4bdd-8505-59d867d2c87f,ResourceVersion:21692184,Generation:0,CreationTimestamp:2020-01-24 14:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee8357 0xc002ee8358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee83d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee83f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.048: INFO: Pod "nginx-deployment-7b8c6f4498-rrsdn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rrsdn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-rrsdn,UID:81c01e9f-c597-4a6c-aab9-6314e79fc913,ResourceVersion:21692055,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee8477 0xc002ee8478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee84f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee8510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c7d86507658edb95270a25cef970c9b99433327ba32302808ee77c0cadbb1771}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.048: INFO: Pod "nginx-deployment-7b8c6f4498-s64w2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s64w2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-s64w2,UID:d5261004-b638-45ca-9ca8-40b03f40fd56,ResourceVersion:21692041,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee85e7 0xc002ee85e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee8660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee8680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:15:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9e6a85664f9f5ac098fdb88c8677b7c75b5bbce2c2af6d48252b0e22bbd8caf3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.048: INFO: Pod "nginx-deployment-7b8c6f4498-tvlpg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tvlpg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-tvlpg,UID:1af5ebd0-9b39-47eb-8aa9-15f0c2c56ea5,ResourceVersion:21692035,Generation:0,CreationTimestamp:2020-01-24 14:14:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee8757 0xc002ee8758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee87d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee87f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:14:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-24 14:14:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:14:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://238ff84dd542c9e8e26eabbba5b5ba12092278cc10f23158e2aa977fbdb6dcc0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:15:26.048: INFO: Pod "nginx-deployment-7b8c6f4498-x4dlb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x4dlb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2287,SelfLink:/api/v1/namespaces/deployment-2287/pods/nginx-deployment-7b8c6f4498-x4dlb,UID:edb74b9b-11cb-4f92-be8a-46d75ef71e6d,ResourceVersion:21692190,Generation:0,CreationTimestamp:2020-01-24 14:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 17fb062b-e513-4159-b912-62050e382a38 0xc002ee88c7 0xc002ee88c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9gk8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9gk8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9gk8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ee8940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ee8960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:15:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:15:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2287" for this suite.
Jan 24 14:16:22.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:16:22.689: INFO: namespace deployment-2287 deletion completed in 55.863582561s

• [SLOW TEST:105.506 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:16:22.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-7a826e76-2aa9-4e10-a966-79574d9663e7
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:16:34.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2609" for this suite.
Jan 24 14:16:56.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:16:57.097: INFO: namespace configmap-2609 deletion completed in 22.143164065s

• [SLOW TEST:34.407 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:16:57.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 24 14:16:57.224: INFO: Waiting up to 5m0s for pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5" in namespace "downward-api-1084" to be "success or failure"
Jan 24 14:16:57.241: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.416479ms
Jan 24 14:16:59.249: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024518266s
Jan 24 14:17:01.256: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03116441s
Jan 24 14:17:03.266: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041098035s
Jan 24 14:17:05.277: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052918918s
STEP: Saw pod success
Jan 24 14:17:05.278: INFO: Pod "downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5" satisfied condition "success or failure"
Jan 24 14:17:05.282: INFO: Trying to get logs from node iruya-node pod downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5 container dapi-container: 
STEP: delete the pod
Jan 24 14:17:05.359: INFO: Waiting for pod downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5 to disappear
Jan 24 14:17:05.383: INFO: Pod downward-api-7659bea6-d721-459f-8dea-e77df4fca2f5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:17:05.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1084" for this suite.
Jan 24 14:17:11.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:17:11.542: INFO: namespace downward-api-1084 deletion completed in 6.153354484s

• [SLOW TEST:14.445 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:17:11.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-v8ph
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 14:17:11.636: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-v8ph" in namespace "subpath-4104" to be "success or failure"
Jan 24 14:17:11.690: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Pending", Reason="", readiness=false. Elapsed: 53.272215ms
Jan 24 14:17:13.698: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062002157s
Jan 24 14:17:15.715: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078723398s
Jan 24 14:17:17.725: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088056092s
Jan 24 14:17:19.733: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 8.096692112s
Jan 24 14:17:21.740: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 10.103142567s
Jan 24 14:17:23.750: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 12.113557255s
Jan 24 14:17:25.757: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 14.120893282s
Jan 24 14:17:27.765: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 16.128823587s
Jan 24 14:17:29.774: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 18.137797618s
Jan 24 14:17:31.783: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 20.14643773s
Jan 24 14:17:33.790: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 22.153973361s
Jan 24 14:17:35.803: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 24.166278284s
Jan 24 14:17:37.813: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 26.176872244s
Jan 24 14:17:39.826: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Running", Reason="", readiness=true. Elapsed: 28.189074723s
Jan 24 14:17:41.838: INFO: Pod "pod-subpath-test-projected-v8ph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.201136269s
STEP: Saw pod success
Jan 24 14:17:41.838: INFO: Pod "pod-subpath-test-projected-v8ph" satisfied condition "success or failure"
Jan 24 14:17:41.843: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-v8ph container test-container-subpath-projected-v8ph: 
STEP: delete the pod
Jan 24 14:17:41.906: INFO: Waiting for pod pod-subpath-test-projected-v8ph to disappear
Jan 24 14:17:41.913: INFO: Pod pod-subpath-test-projected-v8ph no longer exists
STEP: Deleting pod pod-subpath-test-projected-v8ph
Jan 24 14:17:41.913: INFO: Deleting pod "pod-subpath-test-projected-v8ph" in namespace "subpath-4104"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:17:41.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4104" for this suite.
Jan 24 14:17:47.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:17:48.052: INFO: namespace subpath-4104 deletion completed in 6.130251874s

• [SLOW TEST:36.509 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:17:48.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5663ee55-a180-411b-9093-ec9bbb0dcd18
STEP: Creating configMap with name cm-test-opt-upd-cf685277-e5c2-4f14-a718-c7bf71140424
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5663ee55-a180-411b-9093-ec9bbb0dcd18
STEP: Updating configmap cm-test-opt-upd-cf685277-e5c2-4f14-a718-c7bf71140424
STEP: Creating configMap with name cm-test-opt-create-aa207f6c-2435-4109-8ea6-39fd471e9d29
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:18:02.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1302" for this suite.
Jan 24 14:18:20.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:18:20.735: INFO: namespace configmap-1302 deletion completed in 18.256053969s

• [SLOW TEST:32.684 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:18:20.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2821, will wait for the garbage collector to delete the pods
Jan 24 14:18:32.996: INFO: Deleting Job.batch foo took: 13.498194ms
Jan 24 14:18:33.397: INFO: Terminating Job.batch foo pods took: 400.661806ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:19:16.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2821" for this suite.
Jan 24 14:19:22.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:19:22.894: INFO: namespace job-2821 deletion completed in 6.16534687s

• [SLOW TEST:62.158 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:19:22.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-c44b9c02-fede-477e-9efa-b8994c127e66
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:19:23.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7085" for this suite.
Jan 24 14:19:29.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:19:29.159: INFO: namespace configmap-7085 deletion completed in 6.135084196s

• [SLOW TEST:6.264 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:19:29.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 24 14:19:43.348: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:43.359: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:45.359: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:45.369: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:47.360: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:47.372: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:49.360: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:49.370: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:51.359: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:51.367: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:53.359: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:53.369: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:55.359: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:55.369: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 14:19:57.359: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 14:19:57.371: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:19:57.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9487" for this suite.
Jan 24 14:20:19.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:20:19.699: INFO: namespace container-lifecycle-hook-9487 deletion completed in 22.285791593s

• [SLOW TEST:50.539 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:20:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 14:20:27.946: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:20:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6713" for this suite.
Jan 24 14:20:34.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:20:34.953: INFO: namespace container-runtime-6713 deletion completed in 6.154470467s

• [SLOW TEST:15.253 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:20:34.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-39864150-dfa3-4095-9f2d-80e9da692b59
STEP: Creating secret with name s-test-opt-upd-8a12da48-d539-435d-ac99-d9cfd6cb2e22
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-39864150-dfa3-4095-9f2d-80e9da692b59
STEP: Updating secret s-test-opt-upd-8a12da48-d539-435d-ac99-d9cfd6cb2e22
STEP: Creating secret with name s-test-opt-create-9dd585e8-f482-492c-b8b8-70f0ced46c3f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:22:04.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1717" for this suite.
Jan 24 14:22:26.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:22:26.225: INFO: namespace secrets-1717 deletion completed in 22.159457759s

• [SLOW TEST:111.271 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:22:26.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 24 14:22:26.931: INFO: Pod name wrapped-volume-race-83c91e43-b5e9-42a7-b513-79b3401642e1: Found 0 pods out of 5
Jan 24 14:22:31.943: INFO: Pod name wrapped-volume-race-83c91e43-b5e9-42a7-b513-79b3401642e1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-83c91e43-b5e9-42a7-b513-79b3401642e1 in namespace emptydir-wrapper-299, will wait for the garbage collector to delete the pods
Jan 24 14:23:00.068: INFO: Deleting ReplicationController wrapped-volume-race-83c91e43-b5e9-42a7-b513-79b3401642e1 took: 26.277713ms
Jan 24 14:23:00.568: INFO: Terminating ReplicationController wrapped-volume-race-83c91e43-b5e9-42a7-b513-79b3401642e1 pods took: 500.551306ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 14:23:56.842: INFO: Pod name wrapped-volume-race-fa20d4a6-1187-4916-9039-c4879622a6bf: Found 0 pods out of 5
Jan 24 14:24:01.887: INFO: Pod name wrapped-volume-race-fa20d4a6-1187-4916-9039-c4879622a6bf: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fa20d4a6-1187-4916-9039-c4879622a6bf in namespace emptydir-wrapper-299, will wait for the garbage collector to delete the pods
Jan 24 14:24:29.994: INFO: Deleting ReplicationController wrapped-volume-race-fa20d4a6-1187-4916-9039-c4879622a6bf took: 18.92118ms
Jan 24 14:24:30.294: INFO: Terminating ReplicationController wrapped-volume-race-fa20d4a6-1187-4916-9039-c4879622a6bf pods took: 300.368194ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 14:25:17.038: INFO: Pod name wrapped-volume-race-aa472369-cf17-4148-902b-b096d3b3fffa: Found 0 pods out of 5
Jan 24 14:25:23.080: INFO: Pod name wrapped-volume-race-aa472369-cf17-4148-902b-b096d3b3fffa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aa472369-cf17-4148-902b-b096d3b3fffa in namespace emptydir-wrapper-299, will wait for the garbage collector to delete the pods
Jan 24 14:25:51.225: INFO: Deleting ReplicationController wrapped-volume-race-aa472369-cf17-4148-902b-b096d3b3fffa took: 12.495699ms
Jan 24 14:25:51.625: INFO: Terminating ReplicationController wrapped-volume-race-aa472369-cf17-4148-902b-b096d3b3fffa pods took: 400.362807ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:26:38.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-299" for this suite.
Jan 24 14:26:48.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:26:48.723: INFO: namespace emptydir-wrapper-299 deletion completed in 10.223152942s

• [SLOW TEST:262.498 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:26:48.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63
Jan 24 14:26:48.836: INFO: Pod name my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63: Found 0 pods out of 1
Jan 24 14:26:53.846: INFO: Pod name my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63: Found 1 pods out of 1
Jan 24 14:26:53.846: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63" are running
Jan 24 14:26:59.864: INFO: Pod "my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63-zvz7k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 14:26:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 14:26:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 14:26:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 14:26:48 +0000 UTC Reason: Message:}])
Jan 24 14:26:59.865: INFO: Trying to dial the pod
Jan 24 14:27:04.925: INFO: Controller my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63: Got expected result from replica 1 [my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63-zvz7k]: "my-hostname-basic-420d054e-b539-4fc2-8852-94e27c560d63-zvz7k", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:27:04.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3788" for this suite.
Jan 24 14:27:10.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:27:11.076: INFO: namespace replication-controller-3788 deletion completed in 6.139192382s

• [SLOW TEST:22.353 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:27:11.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 24 14:27:21.177: INFO: Pod pod-hostip-68735563-7581-47a8-a4c6-5c1d6a71b603 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:27:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8731" for this suite.
Jan 24 14:27:43.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:27:43.361: INFO: namespace pods-8731 deletion completed in 22.176248406s

• [SLOW TEST:32.285 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:27:43.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 24 14:27:43.525: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-488" to be "success or failure"
Jan 24 14:27:43.578: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 52.293749ms
Jan 24 14:27:45.586: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0602018s
Jan 24 14:27:47.602: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076531693s
Jan 24 14:27:49.614: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088811037s
Jan 24 14:27:51.622: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096510936s
Jan 24 14:27:53.631: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105476061s
Jan 24 14:27:55.643: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.117149505s
STEP: Saw pod success
Jan 24 14:27:55.643: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 24 14:27:55.648: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 24 14:27:55.757: INFO: Waiting for pod pod-host-path-test to disappear
Jan 24 14:27:55.802: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:27:55.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-488" for this suite.
Jan 24 14:28:01.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:28:01.996: INFO: namespace hostpath-488 deletion completed in 6.186272386s

• [SLOW TEST:18.635 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:28:01.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 14:28:02.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6863'
Jan 24 14:28:03.930: INFO: stderr: ""
Jan 24 14:28:03.930: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 24 14:28:03.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6863'
Jan 24 14:28:07.547: INFO: stderr: ""
Jan 24 14:28:07.547: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:28:07.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6863" for this suite.
Jan 24 14:28:13.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:28:13.769: INFO: namespace kubectl-6863 deletion completed in 6.215595497s

• [SLOW TEST:11.773 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:28:13.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 24 14:28:13.872: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:28:26.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-111" for this suite.
Jan 24 14:28:32.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:28:32.443: INFO: namespace init-container-111 deletion completed in 6.15014446s

• [SLOW TEST:18.672 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:28:32.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:28:40.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-948" for this suite.
Jan 24 14:29:22.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:29:22.804: INFO: namespace kubelet-test-948 deletion completed in 42.157824297s

• [SLOW TEST:50.360 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:29:22.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:29:22.922: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 24 14:29:27.930: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 14:29:29.946: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 24 14:29:31.957: INFO: Creating deployment "test-rollover-deployment"
Jan 24 14:29:31.972: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 24 14:29:33.990: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 24 14:29:33.998: INFO: Ensure that both replica sets have 1 created replica
Jan 24 14:29:34.004: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 24 14:29:34.016: INFO: Updating deployment test-rollover-deployment
Jan 24 14:29:34.016: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 24 14:29:36.139: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 24 14:29:36.159: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 24 14:29:36.168: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:36.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472974, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:39.419: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:39.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472974, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:40.178: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:40.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472974, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:42.179: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:42.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472974, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:44.182: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:44.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472982, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:46.192: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:46.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472982, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:48.183: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:48.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472982, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:50.180: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:50.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472982, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:52.181: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 14:29:52.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472972, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472982, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715472971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:29:54.190: INFO: 
Jan 24 14:29:54.190: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 24 14:29:54.199: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2797,SelfLink:/apis/apps/v1/namespaces/deployment-2797/deployments/test-rollover-deployment,UID:e1fd224f-e81f-4fcd-b91b-2a0ba2f512a0,ResourceVersion:21694948,Generation:2,CreationTimestamp:2020-01-24 14:29:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-24 14:29:32 +0000 UTC 2020-01-24 14:29:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-24 14:29:52 +0000 UTC 2020-01-24 14:29:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 24 14:29:54.202: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2797,SelfLink:/apis/apps/v1/namespaces/deployment-2797/replicasets/test-rollover-deployment-854595fc44,UID:23ad6eb7-2aa7-4cd5-aae0-d87345763fbd,ResourceVersion:21694937,Generation:2,CreationTimestamp:2020-01-24 14:29:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e1fd224f-e81f-4fcd-b91b-2a0ba2f512a0 0xc002a72747 0xc002a72748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 24 14:29:54.202: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 24 14:29:54.202: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2797,SelfLink:/apis/apps/v1/namespaces/deployment-2797/replicasets/test-rollover-controller,UID:f3ad8e66-18a0-4e63-a4b9-72494f57eb00,ResourceVersion:21694947,Generation:2,CreationTimestamp:2020-01-24 14:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e1fd224f-e81f-4fcd-b91b-2a0ba2f512a0 0xc002a72677 0xc002a72678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:29:54.202: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2797,SelfLink:/apis/apps/v1/namespaces/deployment-2797/replicasets/test-rollover-deployment-9b8b997cf,UID:d92f6a2d-1795-408a-8eac-e72955177560,ResourceVersion:21694899,Generation:2,CreationTimestamp:2020-01-24 14:29:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e1fd224f-e81f-4fcd-b91b-2a0ba2f512a0 0xc002a72810 0xc002a72811}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:29:54.206: INFO: Pod "test-rollover-deployment-854595fc44-9s8l6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-9s8l6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2797,SelfLink:/api/v1/namespaces/deployment-2797/pods/test-rollover-deployment-854595fc44-9s8l6,UID:622a4aae-ddbf-4a18-90f1-03c093359e68,ResourceVersion:21694921,Generation:0,CreationTimestamp:2020-01-24 14:29:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 23ad6eb7-2aa7-4cd5-aae0-d87345763fbd 0xc002341817 0xc002341818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v5lc5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v5lc5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-v5lc5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002341880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023418a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:29:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:29:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:29:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:29:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-24 14:29:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-24 14:29:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f45b84a3b7d893b63d41a8f3a77036e73d73950c16119753c56e4f1c061db365}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:29:54.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2797" for this suite.
Jan 24 14:30:02.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:30:02.400: INFO: namespace deployment-2797 deletion completed in 8.190496979s

• [SLOW TEST:39.596 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:30:02.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:30:11.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9487" for this suite.
Jan 24 14:30:17.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:30:17.442: INFO: namespace emptydir-wrapper-9487 deletion completed in 6.187275703s

• [SLOW TEST:15.041 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:30:17.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:30:17.557: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 24 14:30:22.567: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 14:30:24.576: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 24 14:30:24.626: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7613,SelfLink:/apis/apps/v1/namespaces/deployment-7613/deployments/test-cleanup-deployment,UID:2a4d738b-298c-4156-886e-3e611ff10470,ResourceVersion:21695069,Generation:1,CreationTimestamp:2020-01-24 14:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 24 14:30:24.697: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7613,SelfLink:/apis/apps/v1/namespaces/deployment-7613/replicasets/test-cleanup-deployment-55bbcbc84c,UID:f038d3a6-c3fd-4c04-8730-773614c144d3,ResourceVersion:21695071,Generation:1,CreationTimestamp:2020-01-24 14:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2a4d738b-298c-4156-886e-3e611ff10470 0xc001a9b1e7 0xc001a9b1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:30:24.697: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 24 14:30:24.697: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7613,SelfLink:/apis/apps/v1/namespaces/deployment-7613/replicasets/test-cleanup-controller,UID:7e1364c6-245d-4e2c-8e30-52569c61c630,ResourceVersion:21695070,Generation:1,CreationTimestamp:2020-01-24 14:30:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2a4d738b-298c-4156-886e-3e611ff10470 0xc001a9b117 0xc001a9b118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 24 14:30:24.740: INFO: Pod "test-cleanup-controller-tg96x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tg96x,GenerateName:test-cleanup-controller-,Namespace:deployment-7613,SelfLink:/api/v1/namespaces/deployment-7613/pods/test-cleanup-controller-tg96x,UID:e0c8b7cc-f392-4271-8df5-66ea070cae3b,ResourceVersion:21695067,Generation:0,CreationTimestamp:2020-01-24 14:30:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7e1364c6-245d-4e2c-8e30-52569c61c630 0xc001a9bb17 0xc001a9bb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nbzpg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbzpg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbzpg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a9bb90} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001a9bbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:30:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:30:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:30:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:30:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-24 14:30:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 14:30:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f3c098be9e534d05beebfc50e7cf33942148666540e8d1a37518acecf3b8127b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 14:30:24.740: INFO: Pod "test-cleanup-deployment-55bbcbc84c-8ffmf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-8ffmf,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7613,SelfLink:/api/v1/namespaces/deployment-7613/pods/test-cleanup-deployment-55bbcbc84c-8ffmf,UID:37d5cfad-7f7c-44d3-88a5-c37b2b50e4bd,ResourceVersion:21695076,Generation:0,CreationTimestamp:2020-01-24 14:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c f038d3a6-c3fd-4c04-8730-773614c144d3 0xc001a9bcc7 0xc001a9bcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nbzpg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbzpg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-nbzpg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a9be40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a9bec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:30:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:30:24.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7613" for this suite.
Jan 24 14:30:30.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:30:30.979: INFO: namespace deployment-7613 deletion completed in 6.202659206s

• [SLOW TEST:13.536 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:30:30.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 24 14:30:31.135: INFO: Waiting up to 5m0s for pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9" in namespace "containers-8379" to be "success or failure"
Jan 24 14:30:31.155: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.164731ms
Jan 24 14:30:33.163: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02742218s
Jan 24 14:30:35.170: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034438148s
Jan 24 14:30:37.182: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046081156s
Jan 24 14:30:39.190: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054565774s
Jan 24 14:30:41.196: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Running", Reason="", readiness=true. Elapsed: 10.060864341s
Jan 24 14:30:43.202: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.065988854s
STEP: Saw pod success
Jan 24 14:30:43.202: INFO: Pod "client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9" satisfied condition "success or failure"
Jan 24 14:30:43.204: INFO: Trying to get logs from node iruya-node pod client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9 container test-container: 
STEP: delete the pod
Jan 24 14:30:43.284: INFO: Waiting for pod client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9 to disappear
Jan 24 14:30:43.317: INFO: Pod client-containers-a81dcb86-cc8c-4f82-8eb8-112edaaf16d9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:30:43.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8379" for this suite.
Jan 24 14:30:49.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:30:49.549: INFO: namespace containers-8379 deletion completed in 6.227417201s

• [SLOW TEST:18.570 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:30:49.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7307
I0124 14:30:49.615293       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7307, replica count: 1
I0124 14:30:50.666210       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:51.666655       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:52.666978       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:53.667478       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:54.667807       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:55.668205       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:56.668520       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:57.668833       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 14:30:58.669264       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 24 14:30:58.818: INFO: Created: latency-svc-gsgxq
Jan 24 14:30:58.841: INFO: Got endpoints: latency-svc-gsgxq [72.050371ms]
Jan 24 14:30:58.936: INFO: Created: latency-svc-rbfpd
Jan 24 14:30:58.954: INFO: Got endpoints: latency-svc-rbfpd [110.741677ms]
Jan 24 14:30:59.046: INFO: Created: latency-svc-m49lz
Jan 24 14:30:59.090: INFO: Got endpoints: latency-svc-m49lz [245.626346ms]
Jan 24 14:30:59.092: INFO: Created: latency-svc-n4wtc
Jan 24 14:30:59.104: INFO: Got endpoints: latency-svc-n4wtc [259.559745ms]
Jan 24 14:30:59.192: INFO: Created: latency-svc-x82wn
Jan 24 14:30:59.197: INFO: Got endpoints: latency-svc-x82wn [352.960548ms]
Jan 24 14:30:59.252: INFO: Created: latency-svc-mcj8t
Jan 24 14:30:59.278: INFO: Got endpoints: latency-svc-mcj8t [433.003842ms]
Jan 24 14:30:59.280: INFO: Created: latency-svc-8kvhz
Jan 24 14:30:59.326: INFO: Got endpoints: latency-svc-8kvhz [481.947178ms]
Jan 24 14:30:59.338: INFO: Created: latency-svc-kmps7
Jan 24 14:30:59.350: INFO: Got endpoints: latency-svc-kmps7 [505.517085ms]
Jan 24 14:30:59.396: INFO: Created: latency-svc-4lwgk
Jan 24 14:30:59.407: INFO: Got endpoints: latency-svc-4lwgk [80.992445ms]
Jan 24 14:30:59.479: INFO: Created: latency-svc-tbjgc
Jan 24 14:30:59.486: INFO: Got endpoints: latency-svc-tbjgc [641.559153ms]
Jan 24 14:30:59.522: INFO: Created: latency-svc-6d5w2
Jan 24 14:30:59.528: INFO: Got endpoints: latency-svc-6d5w2 [682.817097ms]
Jan 24 14:30:59.556: INFO: Created: latency-svc-sxnv8
Jan 24 14:30:59.677: INFO: Got endpoints: latency-svc-sxnv8 [832.474051ms]
Jan 24 14:30:59.689: INFO: Created: latency-svc-4psfg
Jan 24 14:30:59.702: INFO: Got endpoints: latency-svc-4psfg [856.674701ms]
Jan 24 14:30:59.765: INFO: Created: latency-svc-nt57f
Jan 24 14:30:59.854: INFO: Got endpoints: latency-svc-nt57f [1.009297901s]
Jan 24 14:30:59.873: INFO: Created: latency-svc-kpmfb
Jan 24 14:31:00.066: INFO: Got endpoints: latency-svc-kpmfb [1.221514566s]
Jan 24 14:31:00.078: INFO: Created: latency-svc-5pjzp
Jan 24 14:31:00.085: INFO: Got endpoints: latency-svc-5pjzp [1.2429842s]
Jan 24 14:31:00.131: INFO: Created: latency-svc-7b5b5
Jan 24 14:31:00.163: INFO: Got endpoints: latency-svc-7b5b5 [1.318074459s]
Jan 24 14:31:00.212: INFO: Created: latency-svc-qnv78
Jan 24 14:31:00.223: INFO: Got endpoints: latency-svc-qnv78 [1.269177388s]
Jan 24 14:31:00.307: INFO: Created: latency-svc-8wvqs
Jan 24 14:31:00.388: INFO: Got endpoints: latency-svc-8wvqs [1.297369302s]
Jan 24 14:31:00.410: INFO: Created: latency-svc-z89n6
Jan 24 14:31:00.423: INFO: Got endpoints: latency-svc-z89n6 [1.318682844s]
Jan 24 14:31:00.450: INFO: Created: latency-svc-jss2d
Jan 24 14:31:00.457: INFO: Got endpoints: latency-svc-jss2d [1.259522889s]
Jan 24 14:31:00.554: INFO: Created: latency-svc-9nljt
Jan 24 14:31:00.569: INFO: Got endpoints: latency-svc-9nljt [1.291029048s]
Jan 24 14:31:00.614: INFO: Created: latency-svc-tjffq
Jan 24 14:31:00.736: INFO: Got endpoints: latency-svc-tjffq [1.384923781s]
Jan 24 14:31:00.739: INFO: Created: latency-svc-nm58m
Jan 24 14:31:00.745: INFO: Got endpoints: latency-svc-nm58m [1.336741872s]
Jan 24 14:31:00.896: INFO: Created: latency-svc-bv2tc
Jan 24 14:31:00.929: INFO: Got endpoints: latency-svc-bv2tc [1.442765999s]
Jan 24 14:31:00.943: INFO: Created: latency-svc-ntvr4
Jan 24 14:31:00.979: INFO: Created: latency-svc-n2vfp
Jan 24 14:31:00.981: INFO: Got endpoints: latency-svc-ntvr4 [1.453508676s]
Jan 24 14:31:00.985: INFO: Got endpoints: latency-svc-n2vfp [1.308038286s]
Jan 24 14:31:01.099: INFO: Created: latency-svc-g46mx
Jan 24 14:31:01.116: INFO: Got endpoints: latency-svc-g46mx [1.414767356s]
Jan 24 14:31:01.143: INFO: Created: latency-svc-v7lsj
Jan 24 14:31:01.145: INFO: Got endpoints: latency-svc-v7lsj [1.291237924s]
Jan 24 14:31:01.219: INFO: Created: latency-svc-95jwr
Jan 24 14:31:01.222: INFO: Got endpoints: latency-svc-95jwr [1.155480559s]
Jan 24 14:31:01.257: INFO: Created: latency-svc-24xxk
Jan 24 14:31:01.258: INFO: Got endpoints: latency-svc-24xxk [1.172695776s]
Jan 24 14:31:01.287: INFO: Created: latency-svc-tpjsv
Jan 24 14:31:01.300: INFO: Got endpoints: latency-svc-tpjsv [1.136776616s]
Jan 24 14:31:01.362: INFO: Created: latency-svc-d67xg
Jan 24 14:31:01.393: INFO: Got endpoints: latency-svc-d67xg [1.168999749s]
Jan 24 14:31:01.394: INFO: Created: latency-svc-m2vdc
Jan 24 14:31:01.401: INFO: Got endpoints: latency-svc-m2vdc [1.013085865s]
Jan 24 14:31:01.446: INFO: Created: latency-svc-qnwfq
Jan 24 14:31:01.448: INFO: Got endpoints: latency-svc-qnwfq [1.02550526s]
Jan 24 14:31:01.534: INFO: Created: latency-svc-49vtl
Jan 24 14:31:01.541: INFO: Got endpoints: latency-svc-49vtl [1.084513444s]
Jan 24 14:31:01.576: INFO: Created: latency-svc-wh4hn
Jan 24 14:31:01.581: INFO: Got endpoints: latency-svc-wh4hn [1.011470282s]
Jan 24 14:31:01.626: INFO: Created: latency-svc-dvm8r
Jan 24 14:31:01.694: INFO: Got endpoints: latency-svc-dvm8r [957.989678ms]
Jan 24 14:31:01.728: INFO: Created: latency-svc-v7z9s
Jan 24 14:31:01.754: INFO: Got endpoints: latency-svc-v7z9s [1.009258135s]
Jan 24 14:31:01.886: INFO: Created: latency-svc-gtl52
Jan 24 14:31:01.892: INFO: Got endpoints: latency-svc-gtl52 [962.724376ms]
Jan 24 14:31:01.962: INFO: Created: latency-svc-c8drr
Jan 24 14:31:01.963: INFO: Got endpoints: latency-svc-c8drr [981.528714ms]
Jan 24 14:31:02.028: INFO: Created: latency-svc-v774k
Jan 24 14:31:02.038: INFO: Got endpoints: latency-svc-v774k [1.052919544s]
Jan 24 14:31:02.100: INFO: Created: latency-svc-fd7kw
Jan 24 14:31:02.107: INFO: Got endpoints: latency-svc-fd7kw [991.018438ms]
Jan 24 14:31:02.178: INFO: Created: latency-svc-l8qbq
Jan 24 14:31:02.191: INFO: Got endpoints: latency-svc-l8qbq [1.045580917s]
Jan 24 14:31:02.240: INFO: Created: latency-svc-92cs4
Jan 24 14:31:02.247: INFO: Got endpoints: latency-svc-92cs4 [1.025097452s]
Jan 24 14:31:02.388: INFO: Created: latency-svc-mmnhr
Jan 24 14:31:02.401: INFO: Got endpoints: latency-svc-mmnhr [1.142886016s]
Jan 24 14:31:02.464: INFO: Created: latency-svc-qnlwz
Jan 24 14:31:02.476: INFO: Got endpoints: latency-svc-qnlwz [1.176221859s]
Jan 24 14:31:02.575: INFO: Created: latency-svc-psswr
Jan 24 14:31:02.592: INFO: Got endpoints: latency-svc-psswr [1.199370392s]
Jan 24 14:31:02.635: INFO: Created: latency-svc-kssh4
Jan 24 14:31:02.710: INFO: Got endpoints: latency-svc-kssh4 [1.308497017s]
Jan 24 14:31:02.720: INFO: Created: latency-svc-k2k9x
Jan 24 14:31:02.749: INFO: Got endpoints: latency-svc-k2k9x [1.300760962s]
Jan 24 14:31:02.753: INFO: Created: latency-svc-blbb8
Jan 24 14:31:02.758: INFO: Got endpoints: latency-svc-blbb8 [1.216953704s]
Jan 24 14:31:02.791: INFO: Created: latency-svc-bn67f
Jan 24 14:31:02.795: INFO: Got endpoints: latency-svc-bn67f [1.21399291s]
Jan 24 14:31:02.899: INFO: Created: latency-svc-fzmp9
Jan 24 14:31:02.909: INFO: Got endpoints: latency-svc-fzmp9 [1.215124458s]
Jan 24 14:31:02.952: INFO: Created: latency-svc-st66p
Jan 24 14:31:02.959: INFO: Got endpoints: latency-svc-st66p [1.205030114s]
Jan 24 14:31:03.062: INFO: Created: latency-svc-g859q
Jan 24 14:31:03.071: INFO: Got endpoints: latency-svc-g859q [1.179177371s]
Jan 24 14:31:03.109: INFO: Created: latency-svc-zx9wv
Jan 24 14:31:03.141: INFO: Got endpoints: latency-svc-zx9wv [1.178171474s]
Jan 24 14:31:03.206: INFO: Created: latency-svc-wkf5q
Jan 24 14:31:03.210: INFO: Got endpoints: latency-svc-wkf5q [1.17181347s]
Jan 24 14:31:03.251: INFO: Created: latency-svc-lfpjj
Jan 24 14:31:03.261: INFO: Got endpoints: latency-svc-lfpjj [1.15295258s]
Jan 24 14:31:03.293: INFO: Created: latency-svc-8lkks
Jan 24 14:31:03.380: INFO: Got endpoints: latency-svc-8lkks [1.188654832s]
Jan 24 14:31:03.381: INFO: Created: latency-svc-shssb
Jan 24 14:31:03.397: INFO: Got endpoints: latency-svc-shssb [1.150156029s]
Jan 24 14:31:03.425: INFO: Created: latency-svc-67n4j
Jan 24 14:31:03.435: INFO: Got endpoints: latency-svc-67n4j [1.034159386s]
Jan 24 14:31:03.507: INFO: Created: latency-svc-l62l5
Jan 24 14:31:03.518: INFO: Got endpoints: latency-svc-l62l5 [1.041203333s]
Jan 24 14:31:03.546: INFO: Created: latency-svc-qjmj8
Jan 24 14:31:03.551: INFO: Got endpoints: latency-svc-qjmj8 [958.48318ms]
Jan 24 14:31:03.600: INFO: Created: latency-svc-7rg5j
Jan 24 14:31:03.655: INFO: Got endpoints: latency-svc-7rg5j [945.486282ms]
Jan 24 14:31:03.671: INFO: Created: latency-svc-4j8xj
Jan 24 14:31:03.679: INFO: Got endpoints: latency-svc-4j8xj [930.148881ms]
Jan 24 14:31:03.724: INFO: Created: latency-svc-dhg2n
Jan 24 14:31:03.741: INFO: Got endpoints: latency-svc-dhg2n [981.980102ms]
Jan 24 14:31:03.833: INFO: Created: latency-svc-zph96
Jan 24 14:31:03.851: INFO: Got endpoints: latency-svc-zph96 [1.055895054s]
Jan 24 14:31:03.924: INFO: Created: latency-svc-nhx5z
Jan 24 14:31:03.994: INFO: Got endpoints: latency-svc-nhx5z [1.084919733s]
Jan 24 14:31:04.007: INFO: Created: latency-svc-f22r6
Jan 24 14:31:04.011: INFO: Got endpoints: latency-svc-f22r6 [1.05182493s]
Jan 24 14:31:04.077: INFO: Created: latency-svc-b59w2
Jan 24 14:31:04.090: INFO: Got endpoints: latency-svc-b59w2 [1.01875639s]
Jan 24 14:31:04.147: INFO: Created: latency-svc-rgwsj
Jan 24 14:31:04.155: INFO: Got endpoints: latency-svc-rgwsj [1.013949555s]
Jan 24 14:31:04.196: INFO: Created: latency-svc-ft8ng
Jan 24 14:31:04.203: INFO: Got endpoints: latency-svc-ft8ng [993.051611ms]
Jan 24 14:31:04.244: INFO: Created: latency-svc-bvqt4
Jan 24 14:31:04.336: INFO: Got endpoints: latency-svc-bvqt4 [1.075431785s]
Jan 24 14:31:04.381: INFO: Created: latency-svc-q79qf
Jan 24 14:31:04.384: INFO: Got endpoints: latency-svc-q79qf [1.0037462s]
Jan 24 14:31:04.433: INFO: Created: latency-svc-2xzbd
Jan 24 14:31:04.434: INFO: Got endpoints: latency-svc-2xzbd [1.036358757s]
Jan 24 14:31:04.506: INFO: Created: latency-svc-k6h2v
Jan 24 14:31:04.523: INFO: Got endpoints: latency-svc-k6h2v [1.087923465s]
Jan 24 14:31:04.565: INFO: Created: latency-svc-vpqbf
Jan 24 14:31:04.565: INFO: Got endpoints: latency-svc-vpqbf [1.047330232s]
Jan 24 14:31:04.647: INFO: Created: latency-svc-4rq9x
Jan 24 14:31:04.655: INFO: Got endpoints: latency-svc-4rq9x [1.104186391s]
Jan 24 14:31:04.682: INFO: Created: latency-svc-dr99m
Jan 24 14:31:04.692: INFO: Got endpoints: latency-svc-dr99m [1.036794543s]
Jan 24 14:31:04.777: INFO: Created: latency-svc-5lbgn
Jan 24 14:31:04.783: INFO: Got endpoints: latency-svc-5lbgn [1.102976005s]
Jan 24 14:31:04.822: INFO: Created: latency-svc-jxlkf
Jan 24 14:31:04.823: INFO: Got endpoints: latency-svc-jxlkf [1.081952306s]
Jan 24 14:31:05.434: INFO: Created: latency-svc-rrzln
Jan 24 14:31:05.434: INFO: Got endpoints: latency-svc-rrzln [1.583174219s]
Jan 24 14:31:05.490: INFO: Created: latency-svc-lbmh4
Jan 24 14:31:05.647: INFO: Got endpoints: latency-svc-lbmh4 [1.652474227s]
Jan 24 14:31:05.674: INFO: Created: latency-svc-f8v57
Jan 24 14:31:05.678: INFO: Got endpoints: latency-svc-f8v57 [1.666363313s]
Jan 24 14:31:05.897: INFO: Created: latency-svc-lvj8t
Jan 24 14:31:05.919: INFO: Got endpoints: latency-svc-lvj8t [1.828498018s]
Jan 24 14:31:05.972: INFO: Created: latency-svc-dsmhk
Jan 24 14:31:05.983: INFO: Got endpoints: latency-svc-dsmhk [1.827540099s]
Jan 24 14:31:06.043: INFO: Created: latency-svc-b6zgz
Jan 24 14:31:06.053: INFO: Got endpoints: latency-svc-b6zgz [1.849178493s]
Jan 24 14:31:06.085: INFO: Created: latency-svc-2wp7j
Jan 24 14:31:06.093: INFO: Got endpoints: latency-svc-2wp7j [1.756538439s]
Jan 24 14:31:06.134: INFO: Created: latency-svc-krsnm
Jan 24 14:31:06.141: INFO: Got endpoints: latency-svc-krsnm [1.757056887s]
Jan 24 14:31:06.237: INFO: Created: latency-svc-m97hs
Jan 24 14:31:06.291: INFO: Got endpoints: latency-svc-m97hs [1.856650645s]
Jan 24 14:31:06.292: INFO: Created: latency-svc-trzw4
Jan 24 14:31:06.313: INFO: Got endpoints: latency-svc-trzw4 [1.788920087s]
Jan 24 14:31:06.457: INFO: Created: latency-svc-qhbwj
Jan 24 14:31:06.467: INFO: Got endpoints: latency-svc-qhbwj [1.901877209s]
Jan 24 14:31:06.510: INFO: Created: latency-svc-2nwxj
Jan 24 14:31:06.541: INFO: Got endpoints: latency-svc-2nwxj [1.885614035s]
Jan 24 14:31:06.616: INFO: Created: latency-svc-g6gs6
Jan 24 14:31:06.624: INFO: Got endpoints: latency-svc-g6gs6 [1.932239723s]
Jan 24 14:31:06.667: INFO: Created: latency-svc-ws25b
Jan 24 14:31:06.786: INFO: Got endpoints: latency-svc-ws25b [2.003001274s]
Jan 24 14:31:06.786: INFO: Created: latency-svc-zfshv
Jan 24 14:31:06.802: INFO: Got endpoints: latency-svc-zfshv [1.979545471s]
Jan 24 14:31:06.870: INFO: Created: latency-svc-jt9cg
Jan 24 14:31:06.882: INFO: Got endpoints: latency-svc-jt9cg [1.447439302s]
Jan 24 14:31:07.047: INFO: Created: latency-svc-vglzj
Jan 24 14:31:07.062: INFO: Got endpoints: latency-svc-vglzj [1.414785448s]
Jan 24 14:31:07.116: INFO: Created: latency-svc-w529w
Jan 24 14:31:07.123: INFO: Got endpoints: latency-svc-w529w [1.445450768s]
Jan 24 14:31:07.258: INFO: Created: latency-svc-2w9mn
Jan 24 14:31:07.274: INFO: Got endpoints: latency-svc-2w9mn [1.354963561s]
Jan 24 14:31:07.305: INFO: Created: latency-svc-xzjcs
Jan 24 14:31:07.310: INFO: Got endpoints: latency-svc-xzjcs [1.327604687s]
Jan 24 14:31:07.342: INFO: Created: latency-svc-d2prz
Jan 24 14:31:07.432: INFO: Got endpoints: latency-svc-d2prz [1.379368234s]
Jan 24 14:31:07.443: INFO: Created: latency-svc-xkw4m
Jan 24 14:31:07.446: INFO: Got endpoints: latency-svc-xkw4m [1.35318893s]
Jan 24 14:31:07.492: INFO: Created: latency-svc-l94wv
Jan 24 14:31:07.499: INFO: Got endpoints: latency-svc-l94wv [1.358335122s]
Jan 24 14:31:07.598: INFO: Created: latency-svc-jn67c
Jan 24 14:31:07.604: INFO: Got endpoints: latency-svc-jn67c [1.312935723s]
Jan 24 14:31:07.635: INFO: Created: latency-svc-xnbnm
Jan 24 14:31:07.653: INFO: Got endpoints: latency-svc-xnbnm [1.339941627s]
Jan 24 14:31:07.689: INFO: Created: latency-svc-4mvgc
Jan 24 14:31:07.797: INFO: Got endpoints: latency-svc-4mvgc [1.329708513s]
Jan 24 14:31:07.814: INFO: Created: latency-svc-cb4h2
Jan 24 14:31:07.832: INFO: Got endpoints: latency-svc-cb4h2 [1.290971336s]
Jan 24 14:31:07.911: INFO: Created: latency-svc-5tqxg
Jan 24 14:31:07.926: INFO: Got endpoints: latency-svc-5tqxg [1.301159694s]
Jan 24 14:31:08.148: INFO: Created: latency-svc-6xqbb
Jan 24 14:31:08.164: INFO: Got endpoints: latency-svc-6xqbb [1.377389225s]
Jan 24 14:31:08.218: INFO: Created: latency-svc-tfr66
Jan 24 14:31:08.224: INFO: Got endpoints: latency-svc-tfr66 [1.421498874s]
Jan 24 14:31:08.332: INFO: Created: latency-svc-t9q8b
Jan 24 14:31:08.341: INFO: Got endpoints: latency-svc-t9q8b [1.459388653s]
Jan 24 14:31:08.416: INFO: Created: latency-svc-hmw58
Jan 24 14:31:08.487: INFO: Got endpoints: latency-svc-hmw58 [1.425170456s]
Jan 24 14:31:08.500: INFO: Created: latency-svc-5v44s
Jan 24 14:31:08.504: INFO: Got endpoints: latency-svc-5v44s [1.380913826s]
Jan 24 14:31:08.572: INFO: Created: latency-svc-4nvsl
Jan 24 14:31:08.591: INFO: Got endpoints: latency-svc-4nvsl [1.316327218s]
Jan 24 14:31:08.672: INFO: Created: latency-svc-q2h7s
Jan 24 14:31:08.684: INFO: Got endpoints: latency-svc-q2h7s [1.373625831s]
Jan 24 14:31:08.731: INFO: Created: latency-svc-mrdms
Jan 24 14:31:08.751: INFO: Got endpoints: latency-svc-mrdms [1.318010886s]
Jan 24 14:31:08.844: INFO: Created: latency-svc-2cqzn
Jan 24 14:31:08.872: INFO: Got endpoints: latency-svc-2cqzn [1.42555106s]
Jan 24 14:31:08.919: INFO: Created: latency-svc-zpcpl
Jan 24 14:31:09.107: INFO: Got endpoints: latency-svc-zpcpl [1.60760969s]
Jan 24 14:31:09.116: INFO: Created: latency-svc-f8255
Jan 24 14:31:09.132: INFO: Got endpoints: latency-svc-f8255 [1.527836397s]
Jan 24 14:31:09.155: INFO: Created: latency-svc-9wfw7
Jan 24 14:31:09.165: INFO: Got endpoints: latency-svc-9wfw7 [1.512419072s]
Jan 24 14:31:09.208: INFO: Created: latency-svc-bcbqs
Jan 24 14:31:09.276: INFO: Got endpoints: latency-svc-bcbqs [1.479158066s]
Jan 24 14:31:09.284: INFO: Created: latency-svc-vdvhb
Jan 24 14:31:09.291: INFO: Got endpoints: latency-svc-vdvhb [1.458774514s]
Jan 24 14:31:09.339: INFO: Created: latency-svc-hf6qw
Jan 24 14:31:09.339: INFO: Got endpoints: latency-svc-hf6qw [1.413053088s]
Jan 24 14:31:09.384: INFO: Created: latency-svc-4lbxk
Jan 24 14:31:09.474: INFO: Got endpoints: latency-svc-4lbxk [1.30960861s]
Jan 24 14:31:09.480: INFO: Created: latency-svc-49php
Jan 24 14:31:09.493: INFO: Got endpoints: latency-svc-49php [1.268683194s]
Jan 24 14:31:09.534: INFO: Created: latency-svc-xx474
Jan 24 14:31:09.539: INFO: Got endpoints: latency-svc-xx474 [1.197666272s]
Jan 24 14:31:09.569: INFO: Created: latency-svc-7mfbt
Jan 24 14:31:09.629: INFO: Got endpoints: latency-svc-7mfbt [1.142049237s]
Jan 24 14:31:09.644: INFO: Created: latency-svc-q7mwz
Jan 24 14:31:09.653: INFO: Got endpoints: latency-svc-q7mwz [1.148590517s]
Jan 24 14:31:09.674: INFO: Created: latency-svc-qdxb9
Jan 24 14:31:09.680: INFO: Got endpoints: latency-svc-qdxb9 [1.088687535s]
Jan 24 14:31:09.711: INFO: Created: latency-svc-6bld4
Jan 24 14:31:09.716: INFO: Got endpoints: latency-svc-6bld4 [1.031389745s]
Jan 24 14:31:09.841: INFO: Created: latency-svc-zxtmm
Jan 24 14:31:09.854: INFO: Got endpoints: latency-svc-zxtmm [1.102401034s]
Jan 24 14:31:09.907: INFO: Created: latency-svc-rgx7p
Jan 24 14:31:09.921: INFO: Got endpoints: latency-svc-rgx7p [1.048827956s]
Jan 24 14:31:10.004: INFO: Created: latency-svc-jqnhc
Jan 24 14:31:10.011: INFO: Got endpoints: latency-svc-jqnhc [903.499966ms]
Jan 24 14:31:10.752: INFO: Created: latency-svc-7pbdw
Jan 24 14:31:10.786: INFO: Got endpoints: latency-svc-7pbdw [1.65429241s]
Jan 24 14:31:10.902: INFO: Created: latency-svc-59z7w
Jan 24 14:31:10.911: INFO: Got endpoints: latency-svc-59z7w [1.746116769s]
Jan 24 14:31:10.952: INFO: Created: latency-svc-vtn89
Jan 24 14:31:10.959: INFO: Got endpoints: latency-svc-vtn89 [1.682571816s]
Jan 24 14:31:11.173: INFO: Created: latency-svc-7kshc
Jan 24 14:31:11.187: INFO: Got endpoints: latency-svc-7kshc [1.896019156s]
Jan 24 14:31:11.314: INFO: Created: latency-svc-45kd2
Jan 24 14:31:11.332: INFO: Got endpoints: latency-svc-45kd2 [1.99294212s]
Jan 24 14:31:11.389: INFO: Created: latency-svc-mq4fk
Jan 24 14:31:11.397: INFO: Got endpoints: latency-svc-mq4fk [1.923194032s]
Jan 24 14:31:11.513: INFO: Created: latency-svc-99rqx
Jan 24 14:31:11.515: INFO: Got endpoints: latency-svc-99rqx [2.021678107s]
Jan 24 14:31:11.557: INFO: Created: latency-svc-kq5kk
Jan 24 14:31:11.570: INFO: Got endpoints: latency-svc-kq5kk [2.030411206s]
Jan 24 14:31:11.642: INFO: Created: latency-svc-mf9rb
Jan 24 14:31:11.651: INFO: Got endpoints: latency-svc-mf9rb [2.021248632s]
Jan 24 14:31:11.692: INFO: Created: latency-svc-ggtck
Jan 24 14:31:11.847: INFO: Got endpoints: latency-svc-ggtck [2.194187379s]
Jan 24 14:31:11.855: INFO: Created: latency-svc-qwcpk
Jan 24 14:31:11.864: INFO: Got endpoints: latency-svc-qwcpk [2.183706472s]
Jan 24 14:31:12.123: INFO: Created: latency-svc-ppgtw
Jan 24 14:31:12.125: INFO: Got endpoints: latency-svc-ppgtw [2.40861261s]
Jan 24 14:31:12.203: INFO: Created: latency-svc-nfhk4
Jan 24 14:31:12.203: INFO: Got endpoints: latency-svc-nfhk4 [2.348738734s]
Jan 24 14:31:12.303: INFO: Created: latency-svc-d27lb
Jan 24 14:31:12.321: INFO: Got endpoints: latency-svc-d27lb [2.399747988s]
Jan 24 14:31:12.375: INFO: Created: latency-svc-5w5df
Jan 24 14:31:12.466: INFO: Got endpoints: latency-svc-5w5df [2.455136972s]
Jan 24 14:31:12.558: INFO: Created: latency-svc-stqcc
Jan 24 14:31:12.577: INFO: Got endpoints: latency-svc-stqcc [1.790218016s]
Jan 24 14:31:12.653: INFO: Created: latency-svc-bphq7
Jan 24 14:31:12.657: INFO: Got endpoints: latency-svc-bphq7 [1.744828366s]
Jan 24 14:31:12.696: INFO: Created: latency-svc-b57md
Jan 24 14:31:12.728: INFO: Got endpoints: latency-svc-b57md [1.769075261s]
Jan 24 14:31:12.815: INFO: Created: latency-svc-gds7r
Jan 24 14:31:12.815: INFO: Got endpoints: latency-svc-gds7r [1.626603756s]
Jan 24 14:31:12.836: INFO: Created: latency-svc-5jcdl
Jan 24 14:31:12.874: INFO: Got endpoints: latency-svc-5jcdl [1.542071377s]
Jan 24 14:31:12.936: INFO: Created: latency-svc-6z7sr
Jan 24 14:31:12.946: INFO: Got endpoints: latency-svc-6z7sr [1.548294108s]
Jan 24 14:31:12.979: INFO: Created: latency-svc-zss2b
Jan 24 14:31:12.987: INFO: Got endpoints: latency-svc-zss2b [1.472254674s]
Jan 24 14:31:13.017: INFO: Created: latency-svc-2l9kx
Jan 24 14:31:13.029: INFO: Got endpoints: latency-svc-2l9kx [1.45877408s]
Jan 24 14:31:13.170: INFO: Created: latency-svc-grgpb
Jan 24 14:31:13.180: INFO: Got endpoints: latency-svc-grgpb [1.529005474s]
Jan 24 14:31:13.213: INFO: Created: latency-svc-j72rs
Jan 24 14:31:13.216: INFO: Got endpoints: latency-svc-j72rs [1.368215678s]
Jan 24 14:31:13.300: INFO: Created: latency-svc-gnmsb
Jan 24 14:31:13.320: INFO: Got endpoints: latency-svc-gnmsb [1.455664966s]
Jan 24 14:31:13.339: INFO: Created: latency-svc-xs4g7
Jan 24 14:31:13.369: INFO: Got endpoints: latency-svc-xs4g7 [1.244572154s]
Jan 24 14:31:13.470: INFO: Created: latency-svc-tddnd
Jan 24 14:31:13.470: INFO: Got endpoints: latency-svc-tddnd [1.267595312s]
Jan 24 14:31:13.506: INFO: Created: latency-svc-x7jq9
Jan 24 14:31:13.518: INFO: Got endpoints: latency-svc-x7jq9 [1.195989055s]
Jan 24 14:31:13.601: INFO: Created: latency-svc-6qhbw
Jan 24 14:31:13.608: INFO: Got endpoints: latency-svc-6qhbw [1.14142747s]
Jan 24 14:31:13.650: INFO: Created: latency-svc-w57nw
Jan 24 14:31:13.653: INFO: Got endpoints: latency-svc-w57nw [1.075475903s]
Jan 24 14:31:13.683: INFO: Created: latency-svc-57wtl
Jan 24 14:31:13.689: INFO: Got endpoints: latency-svc-57wtl [1.03190071s]
Jan 24 14:31:13.752: INFO: Created: latency-svc-d7kjk
Jan 24 14:31:13.763: INFO: Got endpoints: latency-svc-d7kjk [1.034289798s]
Jan 24 14:31:13.820: INFO: Created: latency-svc-v9brk
Jan 24 14:31:13.823: INFO: Got endpoints: latency-svc-v9brk [1.007678023s]
Jan 24 14:31:13.920: INFO: Created: latency-svc-mb28k
Jan 24 14:31:13.939: INFO: Got endpoints: latency-svc-mb28k [1.063782662s]
Jan 24 14:31:13.967: INFO: Created: latency-svc-6jh5b
Jan 24 14:31:13.983: INFO: Got endpoints: latency-svc-6jh5b [1.037220942s]
Jan 24 14:31:14.102: INFO: Created: latency-svc-6tpk5
Jan 24 14:31:14.102: INFO: Got endpoints: latency-svc-6tpk5 [1.1148127s]
Jan 24 14:31:14.144: INFO: Created: latency-svc-fm28h
Jan 24 14:31:14.154: INFO: Got endpoints: latency-svc-fm28h [1.125157872s]
Jan 24 14:31:14.188: INFO: Created: latency-svc-w8x27
Jan 24 14:31:14.247: INFO: Got endpoints: latency-svc-w8x27 [1.067314962s]
Jan 24 14:31:14.267: INFO: Created: latency-svc-5zbrc
Jan 24 14:31:14.271: INFO: Got endpoints: latency-svc-5zbrc [1.054747047s]
Jan 24 14:31:14.315: INFO: Created: latency-svc-526fv
Jan 24 14:31:14.325: INFO: Got endpoints: latency-svc-526fv [1.004496222s]
Jan 24 14:31:14.349: INFO: Created: latency-svc-5tf4c
Jan 24 14:31:14.433: INFO: Got endpoints: latency-svc-5tf4c [1.06290081s]
Jan 24 14:31:14.448: INFO: Created: latency-svc-bkkjq
Jan 24 14:31:14.452: INFO: Got endpoints: latency-svc-bkkjq [981.771669ms]
Jan 24 14:31:14.515: INFO: Created: latency-svc-f6hhz
Jan 24 14:31:14.521: INFO: Got endpoints: latency-svc-f6hhz [1.002917106s]
Jan 24 14:31:14.678: INFO: Created: latency-svc-hxdw9
Jan 24 14:31:14.705: INFO: Got endpoints: latency-svc-hxdw9 [1.0977412s]
Jan 24 14:31:14.739: INFO: Created: latency-svc-d4ds5
Jan 24 14:31:14.741: INFO: Got endpoints: latency-svc-d4ds5 [1.088314401s]
Jan 24 14:31:14.812: INFO: Created: latency-svc-2dfns
Jan 24 14:31:14.821: INFO: Got endpoints: latency-svc-2dfns [1.132311252s]
Jan 24 14:31:14.864: INFO: Created: latency-svc-f427z
Jan 24 14:31:14.883: INFO: Got endpoints: latency-svc-f427z [1.119406376s]
Jan 24 14:31:14.994: INFO: Created: latency-svc-v6dwk
Jan 24 14:31:15.011: INFO: Got endpoints: latency-svc-v6dwk [1.188231405s]
Jan 24 14:31:15.058: INFO: Created: latency-svc-94bc7
Jan 24 14:31:15.230: INFO: Got endpoints: latency-svc-94bc7 [1.29042587s]
Jan 24 14:31:15.285: INFO: Created: latency-svc-d7gqr
Jan 24 14:31:15.303: INFO: Got endpoints: latency-svc-d7gqr [1.320207856s]
Jan 24 14:31:15.325: INFO: Created: latency-svc-qfsms
Jan 24 14:31:15.372: INFO: Got endpoints: latency-svc-qfsms [1.270004391s]
Jan 24 14:31:15.404: INFO: Created: latency-svc-tllm6
Jan 24 14:31:15.407: INFO: Got endpoints: latency-svc-tllm6 [1.252784486s]
Jan 24 14:31:15.476: INFO: Created: latency-svc-nxbnc
Jan 24 14:31:15.553: INFO: Got endpoints: latency-svc-nxbnc [1.304937523s]
Jan 24 14:31:15.598: INFO: Created: latency-svc-7696l
Jan 24 14:31:15.598: INFO: Got endpoints: latency-svc-7696l [1.326873029s]
Jan 24 14:31:15.626: INFO: Created: latency-svc-6mtm9
Jan 24 14:31:15.644: INFO: Got endpoints: latency-svc-6mtm9 [1.319046401s]
Jan 24 14:31:15.756: INFO: Created: latency-svc-fgzq4
Jan 24 14:31:15.765: INFO: Got endpoints: latency-svc-fgzq4 [1.332200997s]
Jan 24 14:31:15.804: INFO: Created: latency-svc-4qbzk
Jan 24 14:31:15.807: INFO: Got endpoints: latency-svc-4qbzk [1.354695368s]
Jan 24 14:31:15.938: INFO: Created: latency-svc-49mfp
Jan 24 14:31:15.938: INFO: Got endpoints: latency-svc-49mfp [1.41757183s]
Jan 24 14:31:16.059: INFO: Created: latency-svc-4l2vj
Jan 24 14:31:16.072: INFO: Got endpoints: latency-svc-4l2vj [1.366616046s]
Jan 24 14:31:16.102: INFO: Created: latency-svc-p86d5
Jan 24 14:31:16.109: INFO: Got endpoints: latency-svc-p86d5 [1.367968136s]
Jan 24 14:31:16.151: INFO: Created: latency-svc-r4rp5
Jan 24 14:31:16.151: INFO: Got endpoints: latency-svc-r4rp5 [1.330260128s]
Jan 24 14:31:16.216: INFO: Created: latency-svc-x2szs
Jan 24 14:31:16.220: INFO: Got endpoints: latency-svc-x2szs [1.336818046s]
Jan 24 14:31:16.253: INFO: Created: latency-svc-smqhj
Jan 24 14:31:16.254: INFO: Got endpoints: latency-svc-smqhj [1.24150182s]
Jan 24 14:31:16.291: INFO: Created: latency-svc-t52xp
Jan 24 14:31:16.291: INFO: Got endpoints: latency-svc-t52xp [1.061705587s]
Jan 24 14:31:16.362: INFO: Created: latency-svc-ww866
Jan 24 14:31:16.375: INFO: Got endpoints: latency-svc-ww866 [1.071016997s]
Jan 24 14:31:16.406: INFO: Created: latency-svc-86gfl
Jan 24 14:31:16.425: INFO: Got endpoints: latency-svc-86gfl [1.052521752s]
Jan 24 14:31:16.425: INFO: Latencies: [80.992445ms 110.741677ms 245.626346ms 259.559745ms 352.960548ms 433.003842ms 481.947178ms 505.517085ms 641.559153ms 682.817097ms 832.474051ms 856.674701ms 903.499966ms 930.148881ms 945.486282ms 957.989678ms 958.48318ms 962.724376ms 981.528714ms 981.771669ms 981.980102ms 991.018438ms 993.051611ms 1.002917106s 1.0037462s 1.004496222s 1.007678023s 1.009258135s 1.009297901s 1.011470282s 1.013085865s 1.013949555s 1.01875639s 1.025097452s 1.02550526s 1.031389745s 1.03190071s 1.034159386s 1.034289798s 1.036358757s 1.036794543s 1.037220942s 1.041203333s 1.045580917s 1.047330232s 1.048827956s 1.05182493s 1.052521752s 1.052919544s 1.054747047s 1.055895054s 1.061705587s 1.06290081s 1.063782662s 1.067314962s 1.071016997s 1.075431785s 1.075475903s 1.081952306s 1.084513444s 1.084919733s 1.087923465s 1.088314401s 1.088687535s 1.0977412s 1.102401034s 1.102976005s 1.104186391s 1.1148127s 1.119406376s 1.125157872s 1.132311252s 1.136776616s 1.14142747s 1.142049237s 1.142886016s 1.148590517s 1.150156029s 1.15295258s 1.155480559s 1.168999749s 1.17181347s 1.172695776s 1.176221859s 1.178171474s 1.179177371s 1.188231405s 1.188654832s 1.195989055s 1.197666272s 1.199370392s 1.205030114s 1.21399291s 1.215124458s 1.216953704s 1.221514566s 1.24150182s 1.2429842s 1.244572154s 1.252784486s 1.259522889s 1.267595312s 1.268683194s 1.269177388s 1.270004391s 1.29042587s 1.290971336s 1.291029048s 1.291237924s 1.297369302s 1.300760962s 1.301159694s 1.304937523s 1.308038286s 1.308497017s 1.30960861s 1.312935723s 1.316327218s 1.318010886s 1.318074459s 1.318682844s 1.319046401s 1.320207856s 1.326873029s 1.327604687s 1.329708513s 1.330260128s 1.332200997s 1.336741872s 1.336818046s 1.339941627s 1.35318893s 1.354695368s 1.354963561s 1.358335122s 1.366616046s 1.367968136s 1.368215678s 1.373625831s 1.377389225s 1.379368234s 1.380913826s 1.384923781s 1.413053088s 1.414767356s 1.414785448s 1.41757183s 1.421498874s 1.425170456s 1.42555106s 1.442765999s 1.445450768s 1.447439302s 1.453508676s 1.455664966s 1.45877408s 1.458774514s 1.459388653s 1.472254674s 1.479158066s 1.512419072s 1.527836397s 1.529005474s 1.542071377s 1.548294108s 1.583174219s 1.60760969s 1.626603756s 1.652474227s 1.65429241s 1.666363313s 1.682571816s 1.744828366s 1.746116769s 1.756538439s 1.757056887s 1.769075261s 1.788920087s 1.790218016s 1.827540099s 1.828498018s 1.849178493s 1.856650645s 1.885614035s 1.896019156s 1.901877209s 1.923194032s 1.932239723s 1.979545471s 1.99294212s 2.003001274s 2.021248632s 2.021678107s 2.030411206s 2.183706472s 2.194187379s 2.348738734s 2.399747988s 2.40861261s 2.455136972s]
Jan 24 14:31:16.425: INFO: 50 %ile: 1.259522889s
Jan 24 14:31:16.425: INFO: 90 %ile: 1.828498018s
Jan 24 14:31:16.426: INFO: 99 %ile: 2.40861261s
Jan 24 14:31:16.426: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:31:16.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7307" for this suite.
Jan 24 14:31:48.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:31:48.616: INFO: namespace svc-latency-7307 deletion completed in 32.183269241s

• [SLOW TEST:59.066 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
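Editor's note on the percentile lines above: the reported 50/90/99 %ile values are consistent with taking the element at 0-based index `len(samples) * p / 100` of the sorted sample list — e.g. with 200 samples, the 99 %ile (2.40861261s) is the 199th sorted sample, and the 50 %ile (1.259522889s) is the 101st. A minimal sketch of that index convention follows; this is an inference from the figures in this log, not necessarily the e2e framework's exact implementation:

```python
def percentile(samples, p):
    """Return the p-th percentile of `samples` using the index convention
    observed in the log summary: sorted[(len * p) // 100], 0-based.

    Note: p must be < 100 for a non-empty list, or the index overflows.
    """
    s = sorted(samples)
    return s[len(s) * p // 100]

# With 200 evenly spaced samples 1..200, the convention picks:
#   p=50 -> index 100, p=90 -> index 180, p=99 -> index 198.
latencies = list(range(1, 201))
print(percentile(latencies, 50), percentile(latencies, 90), percentile(latencies, 99))
```

Applied to the 200 `Latencies:` samples above, this reproduces the three `%ile` INFO lines exactly.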
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:31:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 14:31:48.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6498'
Jan 24 14:31:48.884: INFO: stderr: ""
Jan 24 14:31:48.884: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 24 14:31:58.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6498 -o json'
Jan 24 14:31:59.037: INFO: stderr: ""
Jan 24 14:31:59.037: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-24T14:31:48Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-6498\",\n        \"resourceVersion\": \"21696670\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6498/pods/e2e-test-nginx-pod\",\n        \"uid\": \"747706eb-7400-48e4-901e-718ec15a7fcd\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-5tnjw\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-5tnjw\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-5tnjw\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T14:31:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T14:31:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T14:31:56Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T14:31:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://ca9a40ab1e95d20c2755bf2869531238e1e8c16b514b04c6bdabed7c49f1e580\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-24T14:31:55Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-24T14:31:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 24 14:31:59.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6498'
Jan 24 14:31:59.591: INFO: stderr: ""
Jan 24 14:31:59.591: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 24 14:31:59.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6498'
Jan 24 14:32:06.559: INFO: stderr: ""
Jan 24 14:32:06.559: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:32:06.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6498" for this suite.
Jan 24 14:32:13.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:32:13.198: INFO: namespace kubectl-6498 deletion completed in 6.629897457s

• [SLOW TEST:24.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
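The replace step in the spec above pipes a modified pod manifest into `kubectl replace -f - --namespace=kubectl-6498`. A minimal sketch of what that manifest could contain, assuming only the container image changes to the `docker.io/library/busybox:1.29` image the test verifies afterwards — the names, labels, and namespace are taken from the pod JSON logged above, everything else is omitted:

```yaml
# Hypothetical reduced manifest for the replace step; the real e2e test
# round-trips the full pod JSON shown above with only the image swapped.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-6498
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29  # was docker.io/library/nginx:1.14-alpine
```

`spec.containers[*].image` is one of the few pod fields that may be mutated on a live pod, which is why `kubectl replace` can succeed here without deleting and recreating the pod.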
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:32:13.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-eed42a4d-3892-4eb7-a757-6d7aef991ecc
STEP: Creating a pod to test consume secrets
Jan 24 14:32:13.399: INFO: Waiting up to 5m0s for pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db" in namespace "secrets-2969" to be "success or failure"
Jan 24 14:32:13.421: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Pending", Reason="", readiness=false. Elapsed: 21.883316ms
Jan 24 14:32:15.432: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032801021s
Jan 24 14:32:17.449: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050264022s
Jan 24 14:32:19.460: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060462721s
Jan 24 14:32:21.477: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07741488s
Jan 24 14:32:23.484: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085031499s
STEP: Saw pod success
Jan 24 14:32:23.484: INFO: Pod "pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db" satisfied condition "success or failure"
Jan 24 14:32:23.490: INFO: Trying to get logs from node iruya-node pod pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db container secret-volume-test: 
STEP: delete the pod
Jan 24 14:32:23.560: INFO: Waiting for pod pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db to disappear
Jan 24 14:32:23.571: INFO: Pod pod-secrets-fda1fcca-0a7c-47ef-801f-f6d65ca7e2db no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:32:23.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2969" for this suite.
Jan 24 14:32:29.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:32:29.775: INFO: namespace secrets-2969 deletion completed in 6.197157733s

• [SLOW TEST:16.576 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:32:29.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 14:32:46.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:46.109: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:48.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:48.122: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:50.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:50.124: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:52.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:52.121: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:54.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:54.116: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:56.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:56.121: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 14:32:58.110: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 14:32:58.125: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:32:58.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4286" for this suite.
Jan 24 14:33:20.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:33:20.255: INFO: namespace container-lifecycle-hook-4286 deletion completed in 22.121423691s

• [SLOW TEST:50.480 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:33:20.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0124 14:33:51.286733       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 14:33:51.286: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:33:51.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-993" for this suite.
Jan 24 14:33:59.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:33:59.423: INFO: namespace gc-993 deletion completed in 8.132668177s

• [SLOW TEST:39.168 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
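The orphaning behavior exercised in the spec above is controlled by the `propagationPolicy` field of the delete request. A sketch of the `DeleteOptions` body such a deployment deletion would carry (the exact client call is not shown in the log, so treat this as illustrative):

```yaml
# Illustrative DeleteOptions payload for an orphaning delete.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With `Orphan`, the garbage collector removes the owner references from the deployment's ReplicaSet instead of deleting it, which is why the test waits 30 seconds and checks that the rs is still present.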
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:33:59.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 14:34:12.989: INFO: File wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-88a5ba61-b384-4c98-aa8f-3b8297998f0c contains '' instead of 'foo.example.com.'
Jan 24 14:34:13.000: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-88a5ba61-b384-4c98-aa8f-3b8297998f0c contains '' instead of 'foo.example.com.'
Jan 24 14:34:13.000: INFO: Lookups using dns-3363/dns-test-88a5ba61-b384-4c98-aa8f-3b8297998f0c failed for: [wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:34:18.029: INFO: DNS probes using dns-test-88a5ba61-b384-4c98-aa8f-3b8297998f0c succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 14:34:30.164: INFO: File wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains '' instead of 'bar.example.com.'
Jan 24 14:34:30.171: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains '' instead of 'bar.example.com.'
Jan 24 14:34:30.171: INFO: Lookups using dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 failed for: [wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:34:35.183: INFO: File wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 24 14:34:35.189: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 24 14:34:35.189: INFO: Lookups using dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 failed for: [wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:34:40.182: INFO: File wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 24 14:34:40.188: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 24 14:34:40.188: INFO: Lookups using dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 failed for: [wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:34:45.205: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 24 14:34:45.205: INFO: Lookups using dns-3363/dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 failed for: [jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:34:50.192: INFO: DNS probes using dns-test-8ce255aa-2df0-4618-a48f-4b3fd8d01984 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3363.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 14:35:06.457: INFO: File jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local from pod  dns-3363/dns-test-331ee03c-0260-45d0-b4b9-710385426006 contains '' instead of '10.96.61.208'
Jan 24 14:35:06.457: INFO: Lookups using dns-3363/dns-test-331ee03c-0260-45d0-b4b9-710385426006 failed for: [jessie_udp@dns-test-service-3.dns-3363.svc.cluster.local]

Jan 24 14:35:11.476: INFO: DNS probes using dns-test-331ee03c-0260-45d0-b4b9-710385426006 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:35:11.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3363" for this suite.
Jan 24 14:35:17.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:35:18.030: INFO: namespace dns-3363 deletion completed in 6.269420229s

• [SLOW TEST:78.607 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
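The "test externalName service" in the spec above resolves as a CNAME, which is what the `dig ... CNAME` probes check. A sketch of the three service states the test cycles through, assuming standard ExternalName semantics — the service name, namespace, and hostnames are taken from the log:

```yaml
# State 1: ExternalName service resolving to foo.example.com via CNAME.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3363
spec:
  type: ExternalName
  externalName: foo.example.com
# State 2: spec.externalName changed to bar.example.com (same shape).
# State 3: spec.type changed to ClusterIP, after which the probes expect
# an A record (10.96.61.208 in this run) instead of a CNAME.
```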
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:35:18.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2145
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2145 to expose endpoints map[]
Jan 24 14:35:18.125: INFO: Get endpoints failed (4.625458ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 24 14:35:19.136: INFO: successfully validated that service endpoint-test2 in namespace services-2145 exposes endpoints map[] (1.015873927s elapsed)
STEP: Creating pod pod1 in namespace services-2145
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2145 to expose endpoints map[pod1:[80]]
Jan 24 14:35:23.214: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.064727902s elapsed, will retry)
Jan 24 14:35:26.254: INFO: successfully validated that service endpoint-test2 in namespace services-2145 exposes endpoints map[pod1:[80]] (7.105256944s elapsed)
STEP: Creating pod pod2 in namespace services-2145
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2145 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 24 14:35:31.229: INFO: Unexpected endpoints: found map[748610ee-7867-4e5c-8d02-65ae6bac3479:[80]], expected map[pod1:[80] pod2:[80]] (4.9562065s elapsed, will retry)
Jan 24 14:35:34.418: INFO: successfully validated that service endpoint-test2 in namespace services-2145 exposes endpoints map[pod1:[80] pod2:[80]] (8.144733819s elapsed)
STEP: Deleting pod pod1 in namespace services-2145
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2145 to expose endpoints map[pod2:[80]]
Jan 24 14:35:34.496: INFO: successfully validated that service endpoint-test2 in namespace services-2145 exposes endpoints map[pod2:[80]] (60.891471ms elapsed)
STEP: Deleting pod pod2 in namespace services-2145
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2145 to expose endpoints map[]
Jan 24 14:35:34.588: INFO: successfully validated that service endpoint-test2 in namespace services-2145 exposes endpoints map[] (66.150226ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:35:34.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2145" for this suite.
Jan 24 14:35:56.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:35:56.885: INFO: namespace services-2145 deletion completed in 22.202328358s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.854 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:35:56.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 24 14:35:57.058: INFO: Waiting up to 5m0s for pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140" in namespace "emptydir-404" to be "success or failure"
Jan 24 14:35:57.081: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Pending", Reason="", readiness=false. Elapsed: 23.590685ms
Jan 24 14:35:59.094: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036429552s
Jan 24 14:36:01.171: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113025351s
Jan 24 14:36:03.201: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143347494s
Jan 24 14:36:05.211: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153300166s
Jan 24 14:36:07.254: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.196539571s
STEP: Saw pod success
Jan 24 14:36:07.254: INFO: Pod "pod-796ccefd-5556-4bb6-acbd-e9a527952140" satisfied condition "success or failure"
Jan 24 14:36:07.265: INFO: Trying to get logs from node iruya-node pod pod-796ccefd-5556-4bb6-acbd-e9a527952140 container test-container: 
STEP: delete the pod
Jan 24 14:36:07.396: INFO: Waiting for pod pod-796ccefd-5556-4bb6-acbd-e9a527952140 to disappear
Jan 24 14:36:07.403: INFO: Pod pod-796ccefd-5556-4bb6-acbd-e9a527952140 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:36:07.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-404" for this suite.
Jan 24 14:36:13.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:36:13.603: INFO: namespace emptydir-404 deletion completed in 6.191221296s

• [SLOW TEST:16.718 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:36:13.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ee72e664-d643-471d-b8a0-9d7f46570444
STEP: Creating a pod to test consume secrets
Jan 24 14:36:13.830: INFO: Waiting up to 5m0s for pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081" in namespace "secrets-3743" to be "success or failure"
Jan 24 14:36:13.890: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Pending", Reason="", readiness=false. Elapsed: 59.955029ms
Jan 24 14:36:15.900: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070506965s
Jan 24 14:36:17.917: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087418349s
Jan 24 14:36:19.927: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097304674s
Jan 24 14:36:21.934: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Running", Reason="", readiness=true. Elapsed: 8.103811529s
Jan 24 14:36:23.949: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119349531s
STEP: Saw pod success
Jan 24 14:36:23.949: INFO: Pod "pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081" satisfied condition "success or failure"
Jan 24 14:36:23.954: INFO: Trying to get logs from node iruya-node pod pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081 container secret-volume-test: 
STEP: delete the pod
Jan 24 14:36:24.097: INFO: Waiting for pod pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081 to disappear
Jan 24 14:36:24.113: INFO: Pod pod-secrets-36ad02c6-aa46-4f9c-b01a-fa1173706081 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:36:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3743" for this suite.
Jan 24 14:36:30.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:36:30.240: INFO: namespace secrets-3743 deletion completed in 6.118465037s

• [SLOW TEST:16.636 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:36:30.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 24 14:36:30.400: INFO: Waiting up to 5m0s for pod "pod-e629957b-729e-404c-a64f-00d5544144b1" in namespace "emptydir-8458" to be "success or failure"
Jan 24 14:36:30.406: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.92534ms
Jan 24 14:36:32.414: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013827185s
Jan 24 14:36:34.469: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069295627s
Jan 24 14:36:36.528: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128096018s
Jan 24 14:36:38.539: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13886286s
STEP: Saw pod success
Jan 24 14:36:38.539: INFO: Pod "pod-e629957b-729e-404c-a64f-00d5544144b1" satisfied condition "success or failure"
Jan 24 14:36:38.545: INFO: Trying to get logs from node iruya-node pod pod-e629957b-729e-404c-a64f-00d5544144b1 container test-container: 
STEP: delete the pod
Jan 24 14:36:38.604: INFO: Waiting for pod pod-e629957b-729e-404c-a64f-00d5544144b1 to disappear
Jan 24 14:36:38.636: INFO: Pod pod-e629957b-729e-404c-a64f-00d5544144b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:36:38.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8458" for this suite.
Jan 24 14:36:44.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:36:44.815: INFO: namespace emptydir-8458 deletion completed in 6.17242941s

• [SLOW TEST:14.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:36:44.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9144
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 24 14:36:44.929: INFO: Found 0 stateful pods, waiting for 3
Jan 24 14:36:55.024: INFO: Found 2 stateful pods, waiting for 3
Jan 24 14:37:04.941: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:04.941: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:04.941: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 14:37:14.941: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:14.941: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:14.941: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 24 14:37:14.977: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 24 14:37:25.069: INFO: Updating stateful set ss2
Jan 24 14:37:25.187: INFO: Waiting for Pod statefulset-9144/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 14:37:35.204: INFO: Waiting for Pod statefulset-9144/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 24 14:37:45.546: INFO: Found 2 stateful pods, waiting for 3
Jan 24 14:37:55.556: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:55.557: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:37:55.557: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 14:38:05.557: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:38:05.557: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 14:38:05.557: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 24 14:38:05.595: INFO: Updating stateful set ss2
Jan 24 14:38:05.694: INFO: Waiting for Pod statefulset-9144/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 14:38:15.715: INFO: Waiting for Pod statefulset-9144/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 14:38:25.786: INFO: Updating stateful set ss2
Jan 24 14:38:25.857: INFO: Waiting for StatefulSet statefulset-9144/ss2 to complete update
Jan 24 14:38:25.857: INFO: Waiting for Pod statefulset-9144/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 14:38:35.873: INFO: Waiting for StatefulSet statefulset-9144/ss2 to complete update
Jan 24 14:38:35.873: INFO: Waiting for Pod statefulset-9144/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 24 14:38:45.880: INFO: Deleting all statefulset in ns statefulset-9144
Jan 24 14:38:45.887: INFO: Scaling statefulset ss2 to 0
Jan 24 14:39:15.921: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 14:39:15.926: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:39:15.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9144" for this suite.
Jan 24 14:39:24.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:39:24.136: INFO: namespace statefulset-9144 deletion completed in 8.182237383s

• [SLOW TEST:159.321 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:39:24.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:39:32.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8180" for this suite.
Jan 24 14:39:38.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:39:38.458: INFO: namespace kubelet-test-8180 deletion completed in 6.138943299s

• [SLOW TEST:14.321 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:39:38.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-30f7134d-cd59-4a5e-8130-9f02b9a975f7
STEP: Creating a pod to test consume secrets
Jan 24 14:39:38.582: INFO: Waiting up to 5m0s for pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36" in namespace "secrets-4069" to be "success or failure"
Jan 24 14:39:38.600: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Pending", Reason="", readiness=false. Elapsed: 18.019083ms
Jan 24 14:39:40.608: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025325084s
Jan 24 14:39:42.620: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037605036s
Jan 24 14:39:44.630: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047938181s
Jan 24 14:39:46.638: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Running", Reason="", readiness=true. Elapsed: 8.055978378s
Jan 24 14:39:48.645: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062397469s
STEP: Saw pod success
Jan 24 14:39:48.645: INFO: Pod "pod-secrets-e6153330-6524-4952-9800-9c482306ba36" satisfied condition "success or failure"
Jan 24 14:39:48.649: INFO: Trying to get logs from node iruya-node pod pod-secrets-e6153330-6524-4952-9800-9c482306ba36 container secret-env-test: 
STEP: delete the pod
Jan 24 14:39:48.696: INFO: Waiting for pod pod-secrets-e6153330-6524-4952-9800-9c482306ba36 to disappear
Jan 24 14:39:48.705: INFO: Pod pod-secrets-e6153330-6524-4952-9800-9c482306ba36 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:39:48.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4069" for this suite.
Jan 24 14:39:54.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:39:54.889: INFO: namespace secrets-4069 deletion completed in 6.176947326s

• [SLOW TEST:16.431 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:39:54.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 24 14:39:54.989: INFO: Waiting up to 5m0s for pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6" in namespace "emptydir-3328" to be "success or failure"
Jan 24 14:39:55.011: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.864603ms
Jan 24 14:39:57.155: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166704801s
Jan 24 14:39:59.170: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181039584s
Jan 24 14:40:01.178: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18889399s
Jan 24 14:40:03.185: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.196599316s
STEP: Saw pod success
Jan 24 14:40:03.185: INFO: Pod "pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6" satisfied condition "success or failure"
Jan 24 14:40:03.190: INFO: Trying to get logs from node iruya-node pod pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6 container test-container: 
STEP: delete the pod
Jan 24 14:40:03.286: INFO: Waiting for pod pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6 to disappear
Jan 24 14:40:03.291: INFO: Pod pod-0334e7a5-c03c-4ba9-ac10-9fbc797e48d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:40:03.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3328" for this suite.
Jan 24 14:40:09.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:40:09.439: INFO: namespace emptydir-3328 deletion completed in 6.141930236s

• [SLOW TEST:14.550 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:40:09.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 14:40:09.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-811'
Jan 24 14:40:11.402: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 14:40:11.402: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 24 14:40:11.453: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 24 14:40:11.477: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 24 14:40:11.506: INFO: scanned /root for discovery docs: 
Jan 24 14:40:11.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-811'
Jan 24 14:40:33.807: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 24 14:40:33.807: INFO: stdout: "Created e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392\nScaling up e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 24 14:40:33.808: INFO: stdout: "Created e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392\nScaling up e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 24 14:40:33.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-811'
Jan 24 14:40:33.931: INFO: stderr: ""
Jan 24 14:40:33.931: INFO: stdout: "e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392-h2k6g e2e-test-nginx-rc-kl42r "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 24 14:40:38.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-811'
Jan 24 14:40:39.094: INFO: stderr: ""
Jan 24 14:40:39.094: INFO: stdout: "e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392-h2k6g "
Jan 24 14:40:39.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392-h2k6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-811'
Jan 24 14:40:39.211: INFO: stderr: ""
Jan 24 14:40:39.212: INFO: stdout: "true"
Jan 24 14:40:39.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392-h2k6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-811'
Jan 24 14:40:39.297: INFO: stderr: ""
Jan 24 14:40:39.297: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 24 14:40:39.297: INFO: e2e-test-nginx-rc-2dccfda0f5d042f883722a6e605eb392-h2k6g is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 24 14:40:39.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-811'
Jan 24 14:40:39.428: INFO: stderr: ""
Jan 24 14:40:39.428: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:40:39.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-811" for this suite.
Jan 24 14:40:45.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:40:45.627: INFO: namespace kubectl-811 deletion completed in 6.175295931s

• [SLOW TEST:36.187 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:40:45.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-572.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-572.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 14:40:57.807: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.816: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.824: INFO: Unable to read wheezy_udp@PodARecord from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.829: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.836: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.843: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.852: INFO: Unable to read jessie_udp@PodARecord from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.859: INFO: Unable to read jessie_tcp@PodARecord from pod dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654: the server could not find the requested resource (get pods dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654)
Jan 24 14:40:57.859: INFO: Lookups using dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 24 14:41:02.939: INFO: DNS probes using dns-572/dns-test-d38dd1d5-b5ec-4719-8d66-2c6ffdef8654 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:41:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-572" for this suite.
Jan 24 14:41:09.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:41:09.204: INFO: namespace dns-572 deletion completed in 6.115814242s

• [SLOW TEST:23.576 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:41:09.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 24 14:41:09.268: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 14:41:09.401: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 14:41:09.410: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 24 14:41:09.426: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 24 14:41:09.426: INFO: 	Container weave ready: true, restart count 0
Jan 24 14:41:09.426: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 14:41:09.426: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.426: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 14:41:09.426: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 24 14:41:09.439: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 24 14:41:09.439: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container coredns ready: true, restart count 0
Jan 24 14:41:09.439: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container etcd ready: true, restart count 0
Jan 24 14:41:09.439: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 24 14:41:09.439: INFO: 	Container weave ready: true, restart count 0
Jan 24 14:41:09.439: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 14:41:09.439: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container coredns ready: true, restart count 0
Jan 24 14:41:09.439: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 24 14:41:09.439: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 14:41:09.439: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 24 14:41:09.439: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0fbfcb4e-5ed2-4d44-91bc-0f6d715c2799 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0fbfcb4e-5ed2-4d44-91bc-0f6d715c2799 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0fbfcb4e-5ed2-4d44-91bc-0f6d715c2799
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:41:27.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3453" for this suite.
Jan 24 14:41:57.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:41:57.906: INFO: namespace sched-pred-3453 deletion completed in 30.200893442s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:48.702 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
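The NodeSelector test labels a node (the random key and value 42 are visible in the STEP lines above) and relaunches a pod whose nodeSelector must match it. Equivalent manual steps, with the label taken from the log and the pod name and image chosen for illustration:

```yaml
# Pod that only schedules onto a node carrying the label the test applied
# (kubectl label node iruya-node kubernetes.io/e2e-0fbfcb4e-...=42).
# Pod name and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-0fbfcb4e-5ed2-4d44-91bc-0f6d715c2799: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

Once the label is removed from iruya-node, a new pod with this selector would stay Pending, which is the predicate the scheduler test validates.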
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:41:57.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 24 14:41:58.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 24 14:41:58.218: INFO: stderr: ""
Jan 24 14:41:58.218: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:41:58.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4767" for this suite.
Jan 24 14:42:04.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:42:04.363: INFO: namespace kubectl-4767 deletion completed in 6.139416446s

• [SLOW TEST:6.455 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
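The api-versions check above reduces to confirming that the core group/version "v1" appears as its own line in `kubectl api-versions` output. A sketch of that check; `captured_stdout` is an abridged copy of the stdout logged above, since querying a live cluster would require a reachable kubeconfig:

```python
# Mirrors the conformance check: "v1" must appear as an exact line in
# `kubectl api-versions` output. This is an abridged copy of the output
# logged above.
captured_stdout = """apiregistration.k8s.io/v1
apps/v1
batch/v1
networking.k8s.io/v1
rbac.authorization.k8s.io/v1
storage.k8s.io/v1
v1"""

versions = captured_stdout.splitlines()
assert "v1" in versions, "core v1 API group/version missing"
print("v1 found among %d group/versions" % len(versions))
```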
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:42:04.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 24 14:42:04.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 24 14:42:04.630: INFO: stderr: ""
Jan 24 14:42:04.630: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:42:04.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9401" for this suite.
Jan 24 14:42:10.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:42:10.777: INFO: namespace kubectl-9401 deletion completed in 6.142883794s

• [SLOW TEST:6.414 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
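The cluster-info stdout logged above is wrapped in ANSI color escapes (`\x1b[0;32m` for green, `\x1b[0;33m` for yellow, `\x1b[0m` to reset), which is why it looks garbled in the log. A small sketch that strips them to recover the readable text, using a fragment of the string captured above:

```python
import re

# Fragment of the raw cluster-info stdout logged above, escapes intact.
raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m")

# SGR color sequences look like ESC [ <params> m; remove them all.
ansi = re.compile(r"\x1b\[[0-9;]*m")
plain = ansi.sub("", raw)
print(plain)  # Kubernetes master is running at https://172.24.4.57:6443
```

The test itself only asserts that the master and KubeDNS endpoints are listed, which the stripped text makes plain.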
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:42:10.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4230
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4230
STEP: Creating statefulset with conflicting port in namespace statefulset-4230
STEP: Waiting until pod test-pod starts running in namespace statefulset-4230
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4230
Jan 24 14:42:23.057: INFO: Observed stateful pod in namespace: statefulset-4230, name: ss-0, uid: 7e627e8e-4d40-4b38-9a8c-3483247af689, status phase: Pending. Waiting for statefulset controller to delete.
Jan 24 14:42:26.495: INFO: Observed stateful pod in namespace: statefulset-4230, name: ss-0, uid: 7e627e8e-4d40-4b38-9a8c-3483247af689, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 14:42:26.525: INFO: Observed stateful pod in namespace: statefulset-4230, name: ss-0, uid: 7e627e8e-4d40-4b38-9a8c-3483247af689, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 14:42:26.598: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4230
STEP: Removing pod with conflicting port in namespace statefulset-4230
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4230 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 24 14:42:36.725: INFO: Deleting all statefulset in ns statefulset-4230
Jan 24 14:42:36.732: INFO: Scaling statefulset ss to 0
Jan 24 14:42:46.771: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 14:42:46.776: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:42:46.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4230" for this suite.
Jan 24 14:42:52.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:42:52.969: INFO: namespace statefulset-4230 deletion completed in 6.142693912s

• [SLOW TEST:42.191 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
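The StatefulSet test engineers a hostPort conflict: a standalone pod occupies a port on a node, the StatefulSet's pod ss-0 requests the same hostPort on that node and goes Pending then Failed, and once the blocking pod is deleted the controller recreates ss-0 successfully. A sketch of that setup, assuming the names from the log; the port number is illustrative, not taken from the test source:

```yaml
# Standalone pod and StatefulSet pod template pinned to the same node and
# requesting the same hostPort, so ss-0 fails until test-pod is removed.
# The port value is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node
  containers:
  - name: holder
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 8080            # occupies the port on the node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: iruya-node
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 8080
          hostPort: 8080        # conflicts until test-pod is deleted
```

Deleting test-pod frees the port, and the StatefulSet controller's recreation of ss-0 is exactly what the log records between 14:42:26 and 14:42:36.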
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:42:52.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:42:53.003: INFO: Creating deployment "test-recreate-deployment"
Jan 24 14:42:53.068: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 24 14:42:53.076: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 24 14:42:55.097: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 24 14:42:55.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:42:57.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:42:59.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715473773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 14:43:01.111: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 24 14:43:01.123: INFO: Updating deployment test-recreate-deployment
Jan 24 14:43:01.123: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 24 14:43:01.557: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6169,SelfLink:/apis/apps/v1/namespaces/deployment-6169/deployments/test-recreate-deployment,UID:62155ddc-0d5e-42bf-81f1-4e70d388dd2c,ResourceVersion:21698688,Generation:2,CreationTimestamp:2020-01-24 14:42:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-24 14:43:01 +0000 UTC 2020-01-24 14:43:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-24 14:43:01 +0000 UTC 2020-01-24 14:42:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 24 14:43:01.564: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6169,SelfLink:/apis/apps/v1/namespaces/deployment-6169/replicasets/test-recreate-deployment-5c8c9cc69d,UID:67ba0bd7-da84-4b01-bd92-af5e93400d07,ResourceVersion:21698686,Generation:1,CreationTimestamp:2020-01-24 14:43:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 62155ddc-0d5e-42bf-81f1-4e70d388dd2c 0xc000afac67 0xc000afac68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:43:01.564: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 24 14:43:01.564: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6169,SelfLink:/apis/apps/v1/namespaces/deployment-6169/replicasets/test-recreate-deployment-6df85df6b9,UID:2fbc0fc7-9b98-4d4f-a1ac-0ece49d78d96,ResourceVersion:21698678,Generation:2,CreationTimestamp:2020-01-24 14:42:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 62155ddc-0d5e-42bf-81f1-4e70d388dd2c 0xc000afad37 0xc000afad38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 14:43:01.574: INFO: Pod "test-recreate-deployment-5c8c9cc69d-q899d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-q899d,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6169,SelfLink:/api/v1/namespaces/deployment-6169/pods/test-recreate-deployment-5c8c9cc69d-q899d,UID:238cd67b-442e-4732-840c-858a7bde5c4f,ResourceVersion:21698690,Generation:0,CreationTimestamp:2020-01-24 14:43:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 67ba0bd7-da84-4b01-bd92-af5e93400d07 0xc000afb687 0xc000afb688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f26cw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f26cw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-f26cw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000afb700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000afb720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:43:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:43:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:43:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:43:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-24 14:43:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:43:01.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6169" for this suite.
Jan 24 14:43:07.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:43:07.813: INFO: namespace deployment-6169 deletion completed in 6.233554668s

• [SLOW TEST:14.844 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
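The object dump above shows `Strategy:DeploymentStrategy{Type:Recreate,...}`: with the Recreate strategy, the old ReplicaSet (redis) is scaled to zero before the new ReplicaSet (nginx) is scaled up, so old and new pods never run together. A minimal manifest consistent with the dumped object; labels and images are taken from the dump:

```yaml
# Recreate-strategy deployment matching the object dumped above: old pods
# are torn down completely before new pods start.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # no RollingUpdate: old RS -> 0, then new RS -> 1
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

This is why the old ReplicaSet dump shows `Replicas:*0` while the new ReplicaSet is still progressing.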
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:43:07.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 14:43:08.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01" in namespace "projected-5847" to be "success or failure"
Jan 24 14:43:08.163: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Pending", Reason="", readiness=false. Elapsed: 111.848618ms
Jan 24 14:43:10.175: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124606776s
Jan 24 14:43:12.193: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142048848s
Jan 24 14:43:14.200: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149016156s
Jan 24 14:43:16.209: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158676357s
Jan 24 14:43:18.222: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171775883s
STEP: Saw pod success
Jan 24 14:43:18.223: INFO: Pod "downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01" satisfied condition "success or failure"
Jan 24 14:43:18.232: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01 container client-container: 
STEP: delete the pod
Jan 24 14:43:18.295: INFO: Waiting for pod downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01 to disappear
Jan 24 14:43:18.306: INFO: Pod downwardapi-volume-660d8e3b-0d90-4eb2-b546-2b280f875f01 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:43:18.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5847" for this suite.
Jan 24 14:43:24.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:43:24.573: INFO: namespace projected-5847 deletion completed in 6.25319842s

• [SLOW TEST:16.760 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
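The downward API test above drives a common e2e pattern: poll a pod's phase every couple of seconds, logging the elapsed time, until it reaches "Succeeded" or "Failed" or a 5m0s timeout expires. A minimal sketch of that wait loop (the helper name and injectable `clock`/`sleep` parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or times out.

    Mirrors the "Waiting up to 5m0s for pod ... to be 'success or failure'"
    loop in the log above: each iteration records the phase and elapsed
    time; 'Succeeded' and 'Failed' are the accepted terminal phases.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f"Pod phase={phase!r}, elapsed={elapsed:.3f}s")
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

In the log, five "Pending" samples precede the "Succeeded" one; the same shape can be simulated by feeding the helper a canned phase sequence and a no-op sleep.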
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:43:24.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 in namespace container-probe-8365
Jan 24 14:43:32.775: INFO: Started pod liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 in namespace container-probe-8365
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 14:43:32.785: INFO: Initial restart count of pod liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is 0
Jan 24 14:43:48.886: INFO: Restart count of pod container-probe-8365/liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is now 1 (16.100843462s elapsed)
Jan 24 14:44:08.998: INFO: Restart count of pod container-probe-8365/liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is now 2 (36.213534161s elapsed)
Jan 24 14:44:29.099: INFO: Restart count of pod container-probe-8365/liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is now 3 (56.313866783s elapsed)
Jan 24 14:44:49.185: INFO: Restart count of pod container-probe-8365/liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is now 4 (1m16.400364705s elapsed)
Jan 24 14:45:07.551: INFO: Restart count of pod container-probe-8365/liveness-8981d7d6-e15e-48fa-be7b-4b0e8e4ef837 is now 5 (1m34.765871552s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:45:07.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8365" for this suite.
Jan 24 14:45:13.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:45:13.848: INFO: namespace container-probe-8365 deletion completed in 6.204518188s

• [SLOW TEST:109.275 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
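The liveness-probe test above samples the pod's restartCount over time (0, 1, 2, 3, 4, 5) and asserts it only ever grows. The invariant it checks can be sketched as a one-liner (the function name is illustrative, not part of the test framework):

```python
def restart_counts_monotonic(samples):
    """Return True if observed restart counts never decrease.

    Each time the kubelet kills and restarts the container after a failed
    liveness probe, the recorded count may stay equal or grow between
    observations (0, 1, 2, ...), but must never shrink.
    """
    return all(a <= b for a, b in zip(samples, samples[1:]))
```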
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:45:13.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-82ad62b8-c084-48df-a96e-08e95ad546fc
STEP: Creating a pod to test consume secrets
Jan 24 14:45:14.058: INFO: Waiting up to 5m0s for pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e" in namespace "secrets-5456" to be "success or failure"
Jan 24 14:45:14.076: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.298576ms
Jan 24 14:45:16.084: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025463817s
Jan 24 14:45:18.098: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039498624s
Jan 24 14:45:20.104: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045441203s
Jan 24 14:45:22.110: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05227931s
STEP: Saw pod success
Jan 24 14:45:22.110: INFO: Pod "pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e" satisfied condition "success or failure"
Jan 24 14:45:22.115: INFO: Trying to get logs from node iruya-node pod pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e container secret-volume-test: 
STEP: delete the pod
Jan 24 14:45:22.234: INFO: Waiting for pod pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e to disappear
Jan 24 14:45:22.243: INFO: Pod pod-secrets-c6e4d6e6-5ae6-4a0d-87cc-fadcd770580e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:45:22.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5456" for this suite.
Jan 24 14:45:28.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:45:28.502: INFO: namespace secrets-5456 deletion completed in 6.253667853s

• [SLOW TEST:14.653 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
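"Consumable from pods in volume with mappings" refers to a secret volume's `items` field, which remaps secret keys onto custom file paths inside the mount instead of using the keys as filenames. A rough model of that projection, under the assumption that data and items are plain dicts (the real kubelet instead marks the volume setup as failed on a missing key):

```python
def project_secret_items(secret_data, items):
    """Map secret keys onto file paths, as a volume 'items' mapping does.

    secret_data: dict of key -> bytes.
    items: list of {'key': ..., 'path': ...} mappings.
    Returns dict of relative file path -> file contents; raises KeyError
    if a mapping names a key the secret does not contain.
    """
    return {item["path"]: secret_data[item["key"]] for item in items}
```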
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:45:28.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:45:28.673: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 14:45:28.715: INFO: Number of nodes with available pods: 0
Jan 24 14:45:28.716: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:30.763: INFO: Number of nodes with available pods: 0
Jan 24 14:45:30.763: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:31.731: INFO: Number of nodes with available pods: 0
Jan 24 14:45:31.731: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:32.733: INFO: Number of nodes with available pods: 0
Jan 24 14:45:32.734: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:33.742: INFO: Number of nodes with available pods: 0
Jan 24 14:45:33.742: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:35.994: INFO: Number of nodes with available pods: 0
Jan 24 14:45:35.994: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:37.026: INFO: Number of nodes with available pods: 0
Jan 24 14:45:37.026: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:37.725: INFO: Number of nodes with available pods: 0
Jan 24 14:45:37.725: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:38.730: INFO: Number of nodes with available pods: 0
Jan 24 14:45:38.730: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:45:39.731: INFO: Number of nodes with available pods: 2
Jan 24 14:45:39.731: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 24 14:45:39.868: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:39.868: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:40.907: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:40.907: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:42.093: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:42.093: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:42.905: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:42.905: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:43.990: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:43.990: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:44.904: INFO: Wrong image for pod: daemon-set-zpgwt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:44.904: INFO: Pod daemon-set-zpgwt is not available
Jan 24 14:45:44.904: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:45.910: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:46.909: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:46.909: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:48.028: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:48.028: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:48.913: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:48.913: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:50.445: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:50.445: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:50.906: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:50.906: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:51.905: INFO: Pod daemon-set-9lm49 is not available
Jan 24 14:45:51.905: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:52.911: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:53.948: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:54.905: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:55.904: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:56.910: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:56.910: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:45:57.929: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:57.929: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:45:58.905: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:58.905: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:45:59.909: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:45:59.909: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:00.908: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:00.908: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:01.909: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:01.909: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:02.912: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:02.912: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:03.906: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:03.906: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:04.907: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:04.907: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:05.907: INFO: Wrong image for pod: daemon-set-zphw9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 14:46:05.907: INFO: Pod daemon-set-zphw9 is not available
Jan 24 14:46:06.914: INFO: Pod daemon-set-75st7 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 24 14:46:06.931: INFO: Number of nodes with available pods: 1
Jan 24 14:46:06.931: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:07.951: INFO: Number of nodes with available pods: 1
Jan 24 14:46:07.951: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:08.946: INFO: Number of nodes with available pods: 1
Jan 24 14:46:08.947: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:09.949: INFO: Number of nodes with available pods: 1
Jan 24 14:46:09.949: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:10.951: INFO: Number of nodes with available pods: 1
Jan 24 14:46:10.951: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:11.953: INFO: Number of nodes with available pods: 1
Jan 24 14:46:11.953: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:12.951: INFO: Number of nodes with available pods: 1
Jan 24 14:46:12.951: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:46:13.956: INFO: Number of nodes with available pods: 2
Jan 24 14:46:13.956: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8211, will wait for the garbage collector to delete the pods
Jan 24 14:46:14.078: INFO: Deleting DaemonSet.extensions daemon-set took: 37.317808ms
Jan 24 14:46:14.379: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.492722ms
Jan 24 14:46:21.714: INFO: Number of nodes with available pods: 0
Jan 24 14:46:21.714: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 14:46:21.722: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8211/daemonsets","resourceVersion":"21699154"},"items":null}

Jan 24 14:46:21.728: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8211/pods","resourceVersion":"21699154"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:46:21.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8211" for this suite.
Jan 24 14:46:27.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:46:27.915: INFO: namespace daemonsets-8211 deletion completed in 6.167078315s

• [SLOW TEST:59.412 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
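The DaemonSet test above loops on two checks while the RollingUpdate replaces pods one node at a time: how many nodes have an available daemon pod, and which pods still run the old `docker.io/library/nginx:1.14-alpine` image instead of the updated `gcr.io/kubernetes-e2e-test-images/redis:1.0`. Both checks can be sketched as simple reductions (the function names and input shapes are illustrative assumptions, not the framework's types):

```python
def nodes_with_available_pods(pods):
    """Count distinct nodes hosting at least one available daemon pod.

    pods: iterable of (node_name, available) tuples. Reproduces the
    "Number of nodes with available pods: N" counter the test polls while
    the rolling update drains and recreates pods node by node.
    """
    return len({node for node, available in pods if available})

def pods_with_wrong_image(pod_images, expected):
    """Names of daemon pods still running an image other than expected."""
    return sorted(name for name, image in pod_images.items() if image != expected)
```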
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:46:27.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 24 14:46:28.400: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 14:46:28.441: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 14:46:28.446: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Jan 24 14:46:28.484: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.484: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 14:46:28.484: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 24 14:46:28.484: INFO: 	Container weave ready: true, restart count 0
Jan 24 14:46:28.484: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 14:46:28.484: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 24 14:46:28.515: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 24 14:46:28.515: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 14:46:28.515: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 24 14:46:28.515: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 24 14:46:28.515: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container coredns ready: true, restart count 0
Jan 24 14:46:28.515: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container etcd ready: true, restart count 0
Jan 24 14:46:28.515: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container weave ready: true, restart count 0
Jan 24 14:46:28.515: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 14:46:28.515: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 24 14:46:28.515: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 24 14:46:28.751: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 24 14:46:28.751: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe.15ecd9eec412fa24], Reason = [Scheduled], Message = [Successfully assigned sched-pred-563/filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe.15ecd9f0108076e5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe.15ecd9f0e6352f45], Reason = [Created], Message = [Created container filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe.15ecd9f10336beb0], Reason = [Started], Message = [Started container filler-pod-2e8ff4c1-92ab-4f67-9126-9f47884ddffe]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e.15ecd9eec9e335c5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-563/filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e.15ecd9effc630dd0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e.15ecd9f0cfc0d60d], Reason = [Created], Message = [Created container filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e.15ecd9f0f364d8a4], Reason = [Started], Message = [Started container filler-pod-d0e1ef67-a82b-493f-a62a-b5a3c2ce880e]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ecd9f197c5d3ad], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:46:42.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-563" for this suite.
Jan 24 14:46:49.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:46:49.996: INFO: namespace sched-pred-563 deletion completed in 7.866483007s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.081 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
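The scheduler-predicates test above sums each node's existing CPU requests (e.g. 250m for kube-apiserver, 200m for kube-controller-manager, 20m for weave-net), starts filler pods that consume most of the remaining allocatable CPU, then creates one more pod sized so that no node can fit it, producing the `0/2 nodes are available: 2 Insufficient cpu.` event. A simplified model of that fit check, assuming only the `m`-suffixed and whole-core CPU quantity forms seen in this log (the real quantity parser accepts more formats):

```python
def parse_cpu(quantity):
    """Parse a CPU quantity like '250m' or '2' into millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def fits_on_node(allocatable, existing_requests, new_request):
    """True if a pod requesting new_request CPU fits on the node.

    The predicate passes only when the new request does not exceed the
    node's allocatable CPU minus the sum of already-scheduled requests.
    """
    free = parse_cpu(allocatable) - sum(map(parse_cpu, existing_requests))
    return parse_cpu(new_request) <= free
```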
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:46:49.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 14:47:00.020: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:47:00.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8718" for this suite.
Jan 24 14:47:06.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:47:06.340: INFO: namespace container-runtime-8718 deletion completed in 6.208747198s

• [SLOW TEST:16.342 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
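Annotation: the `FallbackToLogsOnError` behaviour exercised by the test above can be sketched as a small hypothetical helper. This is an illustration of the policy's documented semantics (termination-message file wins; on failure with this policy, fall back to the log tail), not the kubelet's actual implementation:

```python
def termination_message(file_contents, log_tail, policy, exit_code):
    """Illustrative sketch of TerminationMessagePolicy semantics.

    The contents of terminationMessagePath take priority. Only when the
    file is empty, the policy is FallbackToLogsOnError, and the container
    actually failed does the log tail become the termination message.
    """
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""
```

In the run above, the container wrote "DONE" to its log, failed, and the test then saw "DONE" as the termination message, matching the fallback branch.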
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:47:06.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 14:47:06.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71" in namespace "downward-api-1554" to be "success or failure"
Jan 24 14:47:06.520: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71": Phase="Pending", Reason="", readiness=false. Elapsed: 17.00837ms
Jan 24 14:47:08.535: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031998325s
Jan 24 14:47:10.552: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049113724s
Jan 24 14:47:12.562: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059202071s
Jan 24 14:47:14.571: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068131856s
STEP: Saw pod success
Jan 24 14:47:14.571: INFO: Pod "downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71" satisfied condition "success or failure"
Jan 24 14:47:14.574: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71 container client-container: 
STEP: delete the pod
Jan 24 14:47:14.723: INFO: Waiting for pod downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71 to disappear
Jan 24 14:47:14.789: INFO: Pod downwardapi-volume-83176c1a-0090-4d1a-bd1d-37c59775fe71 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:47:14.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1554" for this suite.
Jan 24 14:47:20.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:47:20.999: INFO: namespace downward-api-1554 deletion completed in 6.195251984s

• [SLOW TEST:14.658 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
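Annotation: the repeated `Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-terminal loop ("success or failure"). A minimal sketch of that pattern, with a hypothetical `get_phase` callback standing in for the API call the framework makes:

```python
def wait_for_pod_success(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() every interval_s seconds until the pod reaches a
    terminal phase, mirroring the 'success or failure' wait in the log.

    Returns the (simulated) elapsed seconds on success; raises on
    failure or timeout. Sleeping is elided so the sketch stays testable.
    """
    elapsed = 0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase == "Succeeded":
            return elapsed
        if phase == "Failed":
            raise RuntimeError("pod failed after %ds" % elapsed)
        elapsed += interval_s
    raise TimeoutError("pod did not finish within %ds" % timeout_s)
```

Four "Pending" polls at a 2s interval followed by "Succeeded" gives the roughly 8s elapsed time seen above.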
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:47:20.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:47:47.155: INFO: Container started at 2020-01-24 14:47:27 +0000 UTC, pod became ready at 2020-01-24 14:47:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:47:47.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-731" for this suite.
Jan 24 14:48:09.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:48:09.355: INFO: namespace container-probe-731 deletion completed in 22.192347295s

• [SLOW TEST:48.356 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:48:09.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:48:09.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 24 14:48:09.618: INFO: stderr: ""
Jan 24 14:48:09.618: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:48:09.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2211" for this suite.
Jan 24 14:48:15.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:48:15.789: INFO: namespace kubectl-2211 deletion completed in 6.16110218s

• [SLOW TEST:6.433 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
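Annotation: the kubectl-version spec above only asserts that the command output is complete. A hypothetical one-line check capturing what "all data is printed" means here (both client and server version stanzas present), not the test's actual assertion code:

```python
def has_all_version_data(output):
    """True if `kubectl version` output includes both the client and the
    server version blocks, as the conformance test above expects."""
    return "Client Version" in output and "Server Version" in output
```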
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:48:15.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 24 14:48:23.971: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ec5bbf49-c854-4980-834b-95999ef4f1c2,GenerateName:,Namespace:events-3051,SelfLink:/api/v1/namespaces/events-3051/pods/send-events-ec5bbf49-c854-4980-834b-95999ef4f1c2,UID:9242e4d9-6bfa-4bbb-bfab-4857c5f0c519,ResourceVersion:21699483,Generation:0,CreationTimestamp:2020-01-24 14:48:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 930588948,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pf2qv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pf2qv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pf2qv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00210bd30} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00210bd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:48:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:48:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:48:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 14:48:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-24 14:48:16 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-24 14:48:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f4e11382d08e61d74ee5d4c66817cce547e71205fdeb63533da5ff6e4cd89493}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 24 14:48:25.981: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 24 14:48:27.991: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:48:28.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3051" for this suite.
Jan 24 14:49:08.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:49:08.208: INFO: namespace events-3051 deletion completed in 40.19126401s

• [SLOW TEST:52.419 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:49:08.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:49:14.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2363" for this suite.
Jan 24 14:49:20.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:49:20.934: INFO: namespace namespaces-2363 deletion completed in 6.210935615s
STEP: Destroying namespace "nsdeletetest-4105" for this suite.
Jan 24 14:49:20.937: INFO: Namespace nsdeletetest-4105 was already deleted
STEP: Destroying namespace "nsdeletetest-6620" for this suite.
Jan 24 14:49:27.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:49:27.125: INFO: namespace nsdeletetest-6620 deletion completed in 6.188544785s

• [SLOW TEST:18.917 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:49:27.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 14:49:27.320: INFO: Number of nodes with available pods: 0
Jan 24 14:49:27.320: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:28.934: INFO: Number of nodes with available pods: 0
Jan 24 14:49:28.934: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:29.508: INFO: Number of nodes with available pods: 0
Jan 24 14:49:29.508: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:30.408: INFO: Number of nodes with available pods: 0
Jan 24 14:49:30.408: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:31.342: INFO: Number of nodes with available pods: 0
Jan 24 14:49:31.342: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:32.331: INFO: Number of nodes with available pods: 0
Jan 24 14:49:32.331: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:33.815: INFO: Number of nodes with available pods: 0
Jan 24 14:49:33.815: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:35.819: INFO: Number of nodes with available pods: 0
Jan 24 14:49:35.819: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:36.359: INFO: Number of nodes with available pods: 0
Jan 24 14:49:36.359: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:49:37.344: INFO: Number of nodes with available pods: 2
Jan 24 14:49:37.344: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 24 14:49:37.444: INFO: Number of nodes with available pods: 1
Jan 24 14:49:37.444: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:38.833: INFO: Number of nodes with available pods: 1
Jan 24 14:49:38.833: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:39.659: INFO: Number of nodes with available pods: 1
Jan 24 14:49:39.659: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:40.460: INFO: Number of nodes with available pods: 1
Jan 24 14:49:40.460: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:41.460: INFO: Number of nodes with available pods: 1
Jan 24 14:49:41.460: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:42.465: INFO: Number of nodes with available pods: 1
Jan 24 14:49:42.465: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:43.516: INFO: Number of nodes with available pods: 1
Jan 24 14:49:43.516: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:44.462: INFO: Number of nodes with available pods: 1
Jan 24 14:49:44.462: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:45.459: INFO: Number of nodes with available pods: 1
Jan 24 14:49:45.459: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:49:46.460: INFO: Number of nodes with available pods: 2
Jan 24 14:49:46.460: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-476, will wait for the garbage collector to delete the pods
Jan 24 14:49:46.537: INFO: Deleting DaemonSet.extensions daemon-set took: 14.833557ms
Jan 24 14:49:46.837: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.431845ms
Jan 24 14:49:56.645: INFO: Number of nodes with available pods: 0
Jan 24 14:49:56.645: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 14:49:56.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-476/daemonsets","resourceVersion":"21699705"},"items":null}

Jan 24 14:49:56.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-476/pods","resourceVersion":"21699705"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:49:56.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-476" for this suite.
Jan 24 14:50:02.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:50:02.819: INFO: namespace daemonsets-476 deletion completed in 6.153756477s

• [SLOW TEST:35.694 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
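Annotation: the `Number of nodes with available pods` counter that dominates the DaemonSet log above is, in essence, a set of nodes that have at least one ready daemon pod. A hypothetical sketch of that bookkeeping (pods modelled as `(node_name, ready)` tuples, not the framework's real types):

```python
def nodes_with_available_pods(pods):
    """Return the set of node names with at least one ready daemon pod,
    the quantity the log lines above are counting while the revived pod
    comes back up."""
    return {node for node, ready in pods if ready}
```

While the failed pod is being recreated, only one of the two nodes has a ready pod, which is why the counter sits at 1 before returning to 2.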
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:50:02.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 24 14:50:02.960: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2393,SelfLink:/api/v1/namespaces/watch-2393/configmaps/e2e-watch-test-resource-version,UID:9d181899-81cc-46ec-a433-f55894f63806,ResourceVersion:21699745,Generation:0,CreationTimestamp:2020-01-24 14:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 14:50:02.960: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2393,SelfLink:/api/v1/namespaces/watch-2393/configmaps/e2e-watch-test-resource-version,UID:9d181899-81cc-46ec-a433-f55894f63806,ResourceVersion:21699746,Generation:0,CreationTimestamp:2020-01-24 14:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:50:02.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2393" for this suite.
Jan 24 14:50:08.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:50:09.146: INFO: namespace watch-2393 deletion completed in 6.182209741s

• [SLOW TEST:6.326 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
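Annotation: "watching from a specific resource version" means the server replays only events newer than the version the watch started from, which is why the log above shows exactly the second MODIFIED and the DELETED event. A toy sketch of that filtering, with events modelled as `(resource_version, type, name)` tuples rather than real watch objects:

```python
def replay_from_resource_version(events, start_rv):
    """Return the (type, name) of every event strictly newer than
    start_rv, the subset a watch opened at that resourceVersion would
    observe."""
    return [(etype, name) for rv, etype, name in events if rv > start_rv]
```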
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:50:09.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-042d2224-54d2-407a-b341-d42523cace37
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-042d2224-54d2-407a-b341-d42523cace37
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:50:19.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4974" for this suite.
Jan 24 14:50:41.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:50:41.608: INFO: namespace configmap-4974 deletion completed in 22.134427924s

• [SLOW TEST:32.461 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:50:41.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-8107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8107 to expose endpoints map[]
Jan 24 14:50:41.792: INFO: successfully validated that service multi-endpoint-test in namespace services-8107 exposes endpoints map[] (33.962459ms elapsed)
STEP: Creating pod pod1 in namespace services-8107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8107 to expose endpoints map[pod1:[100]]
Jan 24 14:50:46.033: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.178227582s elapsed, will retry)
Jan 24 14:50:49.074: INFO: successfully validated that service multi-endpoint-test in namespace services-8107 exposes endpoints map[pod1:[100]] (7.219528971s elapsed)
STEP: Creating pod pod2 in namespace services-8107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8107 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 24 14:50:55.351: INFO: Unexpected endpoints: found map[9c818292-9ad9-4ee6-8374-099012e8691d:[100]], expected map[pod1:[100] pod2:[101]] (6.266643921s elapsed, will retry)
Jan 24 14:50:57.396: INFO: successfully validated that service multi-endpoint-test in namespace services-8107 exposes endpoints map[pod1:[100] pod2:[101]] (8.311620243s elapsed)
STEP: Deleting pod pod1 in namespace services-8107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8107 to expose endpoints map[pod2:[101]]
Jan 24 14:50:57.472: INFO: successfully validated that service multi-endpoint-test in namespace services-8107 exposes endpoints map[pod2:[101]] (62.060005ms elapsed)
STEP: Deleting pod pod2 in namespace services-8107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8107 to expose endpoints map[]
Jan 24 14:50:57.517: INFO: successfully validated that service multi-endpoint-test in namespace services-8107 exposes endpoints map[] (14.554265ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:50:57.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8107" for this suite.
Jan 24 14:51:19.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:51:19.768: INFO: namespace services-8107 deletion completed in 22.15564543s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.160 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
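Annotation: the "waiting ... to expose endpoints map[...]" steps above retry until the observed endpoints map matches the expected one (note the intermediate "Unexpected endpoints" lines while pods come up). A minimal sketch of that retry loop, with a hypothetical `get_endpoints` callback in place of the framework's API query:

```python
def wait_for_endpoints(get_endpoints, expected, attempts=5):
    """Retry get_endpoints() until it equals the expected
    {pod_name: [ports]} map, as the endpoint-validation steps above do.
    Raises if the map never converges within the attempt budget."""
    found = None
    for _ in range(attempts):
        found = get_endpoints()
        if found == expected:
            return found
    raise AssertionError(f"expected {expected}, last saw {found}")
```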
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:51:19.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6715/secret-test-6907a775-0aaa-4553-a863-9f1c667b9492
STEP: Creating a pod to test consume secrets
Jan 24 14:51:19.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84" in namespace "secrets-6715" to be "success or failure"
Jan 24 14:51:19.937: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Pending", Reason="", readiness=false. Elapsed: 21.310489ms
Jan 24 14:51:21.946: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03063328s
Jan 24 14:51:23.965: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049208468s
Jan 24 14:51:25.973: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056818349s
Jan 24 14:51:27.981: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065106732s
Jan 24 14:51:29.988: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072577692s
STEP: Saw pod success
Jan 24 14:51:29.988: INFO: Pod "pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84" satisfied condition "success or failure"
Jan 24 14:51:29.993: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84 container env-test: 
STEP: delete the pod
Jan 24 14:51:30.044: INFO: Waiting for pod pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84 to disappear
Jan 24 14:51:30.060: INFO: Pod pod-configmaps-f59778eb-f992-4d7d-93c3-570026529d84 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:51:30.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6715" for this suite.
Jan 24 14:51:36.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:51:36.297: INFO: namespace secrets-6715 deletion completed in 6.233343979s

• [SLOW TEST:16.529 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:51:36.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 24 14:51:36.446: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21699981,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 14:51:36.447: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21699982,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 24 14:51:36.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21699983,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 24 14:51:46.530: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21699998,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 14:51:46.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21699999,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 24 14:51:46.531: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8247,SelfLink:/api/v1/namespaces/watch-8247/configmaps/e2e-watch-test-label-changed,UID:2a4f7b8c-6134-4e9e-84b2-cb2f43308bb2,ResourceVersion:21700000,Generation:0,CreationTimestamp:2020-01-24 14:51:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:51:46.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8247" for this suite.
Jan 24 14:51:52.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:51:52.824: INFO: namespace watch-8247 deletion completed in 6.123119317s

• [SLOW TEST:16.526 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:51:52.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:52:44.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6965" for this suite.
Jan 24 14:52:50.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:52:50.250: INFO: namespace container-runtime-6965 deletion completed in 6.229538493s

• [SLOW TEST:57.426 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:52:50.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-1658ecec-8eb1-468b-a725-499008787a05
STEP: Creating secret with name s-test-opt-upd-966dc07c-06e8-4cbf-8a0e-b70fa5bb141b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1658ecec-8eb1-468b-a725-499008787a05
STEP: Updating secret s-test-opt-upd-966dc07c-06e8-4cbf-8a0e-b70fa5bb141b
STEP: Creating secret with name s-test-opt-create-b7809421-48ea-41b9-a70b-2e9eaf986d6b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:53:04.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2203" for this suite.
Jan 24 14:53:37.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:53:37.131: INFO: namespace projected-2203 deletion completed in 32.136874976s

• [SLOW TEST:46.880 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:53:37.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-46qh
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 14:53:37.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-46qh" in namespace "subpath-3373" to be "success or failure"
Jan 24 14:53:37.267: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Pending", Reason="", readiness=false. Elapsed: 33.811852ms
Jan 24 14:53:39.278: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0444422s
Jan 24 14:53:41.284: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050624644s
Jan 24 14:53:43.297: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063832538s
Jan 24 14:53:45.308: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 8.07521804s
Jan 24 14:53:47.318: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 10.084976014s
Jan 24 14:53:49.325: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 12.092003577s
Jan 24 14:53:51.333: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 14.100209114s
Jan 24 14:53:53.341: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 16.108363739s
Jan 24 14:53:55.351: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 18.118009735s
Jan 24 14:53:57.359: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 20.12601493s
Jan 24 14:53:59.381: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 22.147697102s
Jan 24 14:54:01.391: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 24.158364744s
Jan 24 14:54:03.403: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 26.169460748s
Jan 24 14:54:05.415: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Running", Reason="", readiness=true. Elapsed: 28.181591804s
Jan 24 14:54:07.434: INFO: Pod "pod-subpath-test-downwardapi-46qh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.200458608s
STEP: Saw pod success
Jan 24 14:54:07.434: INFO: Pod "pod-subpath-test-downwardapi-46qh" satisfied condition "success or failure"
Jan 24 14:54:07.438: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-46qh container test-container-subpath-downwardapi-46qh: 
STEP: delete the pod
Jan 24 14:54:07.493: INFO: Waiting for pod pod-subpath-test-downwardapi-46qh to disappear
Jan 24 14:54:07.498: INFO: Pod pod-subpath-test-downwardapi-46qh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-46qh
Jan 24 14:54:07.498: INFO: Deleting pod "pod-subpath-test-downwardapi-46qh" in namespace "subpath-3373"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:54:07.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3373" for this suite.
Jan 24 14:54:13.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:54:13.692: INFO: namespace subpath-3373 deletion completed in 6.185936781s

• [SLOW TEST:36.561 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:54:13.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dbe38526-1848-4c12-915d-ac4118b270e4
STEP: Creating a pod to test consume secrets
Jan 24 14:54:13.819: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65" in namespace "projected-3327" to be "success or failure"
Jan 24 14:54:13.853: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Pending", Reason="", readiness=false. Elapsed: 33.373076ms
Jan 24 14:54:15.868: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048740653s
Jan 24 14:54:17.932: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112535623s
Jan 24 14:54:19.941: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121387726s
Jan 24 14:54:21.948: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129115426s
Jan 24 14:54:23.958: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138160024s
STEP: Saw pod success
Jan 24 14:54:23.958: INFO: Pod "pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65" satisfied condition "success or failure"
Jan 24 14:54:23.963: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 14:54:24.144: INFO: Waiting for pod pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65 to disappear
Jan 24 14:54:24.151: INFO: Pod pod-projected-secrets-8a7ba40e-425c-4274-96d8-e5cdc9d7ae65 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:54:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3327" for this suite.
Jan 24 14:54:30.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:54:30.295: INFO: namespace projected-3327 deletion completed in 6.140049967s

• [SLOW TEST:16.601 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:54:30.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 14:54:30.437: INFO: Number of nodes with available pods: 0
Jan 24 14:54:30.437: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:31.458: INFO: Number of nodes with available pods: 0
Jan 24 14:54:31.458: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:32.456: INFO: Number of nodes with available pods: 0
Jan 24 14:54:32.456: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:33.676: INFO: Number of nodes with available pods: 0
Jan 24 14:54:33.676: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:34.452: INFO: Number of nodes with available pods: 0
Jan 24 14:54:34.452: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:35.555: INFO: Number of nodes with available pods: 0
Jan 24 14:54:35.555: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:37.096: INFO: Number of nodes with available pods: 0
Jan 24 14:54:37.096: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:38.231: INFO: Number of nodes with available pods: 0
Jan 24 14:54:38.231: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:39.085: INFO: Number of nodes with available pods: 0
Jan 24 14:54:39.085: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:39.447: INFO: Number of nodes with available pods: 1
Jan 24 14:54:39.447: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:54:40.455: INFO: Number of nodes with available pods: 1
Jan 24 14:54:40.455: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 24 14:54:41.451: INFO: Number of nodes with available pods: 2
Jan 24 14:54:41.451: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 24 14:54:41.529: INFO: Number of nodes with available pods: 1
Jan 24 14:54:41.529: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:42.552: INFO: Number of nodes with available pods: 1
Jan 24 14:54:42.552: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:43.547: INFO: Number of nodes with available pods: 1
Jan 24 14:54:43.547: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:44.550: INFO: Number of nodes with available pods: 1
Jan 24 14:54:44.550: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:45.543: INFO: Number of nodes with available pods: 1
Jan 24 14:54:45.543: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:46.564: INFO: Number of nodes with available pods: 1
Jan 24 14:54:46.564: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:47.544: INFO: Number of nodes with available pods: 1
Jan 24 14:54:47.544: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:48.553: INFO: Number of nodes with available pods: 1
Jan 24 14:54:48.553: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:49.547: INFO: Number of nodes with available pods: 1
Jan 24 14:54:49.547: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:50.555: INFO: Number of nodes with available pods: 1
Jan 24 14:54:50.555: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:51.577: INFO: Number of nodes with available pods: 1
Jan 24 14:54:51.577: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:52.556: INFO: Number of nodes with available pods: 1
Jan 24 14:54:52.556: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:53.551: INFO: Number of nodes with available pods: 1
Jan 24 14:54:53.551: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:54.546: INFO: Number of nodes with available pods: 1
Jan 24 14:54:54.546: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:55.557: INFO: Number of nodes with available pods: 1
Jan 24 14:54:55.557: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:56.560: INFO: Number of nodes with available pods: 1
Jan 24 14:54:56.560: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:57.544: INFO: Number of nodes with available pods: 1
Jan 24 14:54:57.544: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:58.554: INFO: Number of nodes with available pods: 1
Jan 24 14:54:58.554: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:54:59.547: INFO: Number of nodes with available pods: 1
Jan 24 14:54:59.547: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:55:00.556: INFO: Number of nodes with available pods: 1
Jan 24 14:55:00.556: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:55:01.544: INFO: Number of nodes with available pods: 1
Jan 24 14:55:01.544: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:55:02.564: INFO: Number of nodes with available pods: 1
Jan 24 14:55:02.564: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:55:03.544: INFO: Number of nodes with available pods: 1
Jan 24 14:55:03.544: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:55:04.553: INFO: Number of nodes with available pods: 2
Jan 24 14:55:04.553: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7405, will wait for the garbage collector to delete the pods
Jan 24 14:55:04.623: INFO: Deleting DaemonSet.extensions daemon-set took: 9.490795ms
Jan 24 14:55:04.924: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.47187ms
Jan 24 14:55:17.930: INFO: Number of nodes with available pods: 0
Jan 24 14:55:17.930: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 14:55:17.933: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7405/daemonsets","resourceVersion":"21700507"},"items":null}

Jan 24 14:55:17.937: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7405/pods","resourceVersion":"21700507"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:55:17.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7405" for this suite.
Jan 24 14:55:24.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:55:24.148: INFO: namespace daemonsets-7405 deletion completed in 6.142965636s

• [SLOW TEST:53.852 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:55:24.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 24 14:55:24.226: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:55:37.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9213" for this suite.
Jan 24 14:55:43.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:55:43.644: INFO: namespace init-container-9213 deletion completed in 6.187233861s

• [SLOW TEST:19.496 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
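The InitContainer test above verifies that when an init container fails in a pod whose restartPolicy is Never, the kubelet never starts the app containers and marks the pod Failed. A minimal sketch of such a pod manifest as a Python dict — the pod name, image, and commands are illustrative assumptions, not values taken from this log:

```python
# Sketch of a RestartNever pod with a deliberately failing init container.
# With restartPolicy "Never", the failed init container is not retried,
# the "app" container below never starts, and the pod phase becomes Failed.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "init-fail-demo"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [{
            "name": "init-fails",
            "image": "busybox",              # assumed image
            "command": ["sh", "-c", "exit 1"],
        }],
        "containers": [{
            "name": "app",
            "image": "busybox",              # assumed image
            "command": ["sh", "-c", "echo never runs"],
        }],
    },
}
```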
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:55:43.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 24 14:55:43.873: INFO: Waiting up to 5m0s for pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1" in namespace "containers-1414" to be "success or failure"
Jan 24 14:55:43.892: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.3018ms
Jan 24 14:55:45.900: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026744278s
Jan 24 14:55:47.911: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037133387s
Jan 24 14:55:49.925: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051615518s
Jan 24 14:55:51.934: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060457882s
STEP: Saw pod success
Jan 24 14:55:51.934: INFO: Pod "client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1" satisfied condition "success or failure"
Jan 24 14:55:51.941: INFO: Trying to get logs from node iruya-node pod client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1 container test-container: 
STEP: delete the pod
Jan 24 14:55:52.098: INFO: Waiting for pod client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1 to disappear
Jan 24 14:55:52.107: INFO: Pod client-containers-b51e185d-5cd9-4246-b5b7-208168c607c1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:55:52.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1414" for this suite.
Jan 24 14:55:58.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:55:58.278: INFO: namespace containers-1414 deletion completed in 6.162489994s

• [SLOW TEST:14.633 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
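The entrypoint-override test above works by setting `command` on the container spec: in Kubernetes, `spec.containers[].command` replaces the image's Docker ENTRYPOINT (and `args` replaces CMD). A sketch of the relevant container fragment — the image and echoed arguments are assumptions:

```python
# Container fragment overriding the image's default command (ENTRYPOINT).
container = {
    "name": "test-container",
    "image": "busybox",  # assumed image
    # "command" overrides ENTRYPOINT; "args" would override CMD instead.
    "command": ["/bin/echo", "override", "arguments"],
}
```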
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:55:58.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-67d3b1c1-0fc4-4a04-a1cf-c9d8fda12e6c
STEP: Creating a pod to test consume secrets
Jan 24 14:55:58.353: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b" in namespace "projected-7061" to be "success or failure"
Jan 24 14:55:58.375: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.205641ms
Jan 24 14:56:00.387: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034810452s
Jan 24 14:56:02.393: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040270069s
Jan 24 14:56:04.404: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051053486s
Jan 24 14:56:06.414: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061142232s
Jan 24 14:56:08.425: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072749232s
STEP: Saw pod success
Jan 24 14:56:08.425: INFO: Pod "pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b" satisfied condition "success or failure"
Jan 24 14:56:08.432: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b container secret-volume-test: 
STEP: delete the pod
Jan 24 14:56:08.522: INFO: Waiting for pod pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b to disappear
Jan 24 14:56:08.528: INFO: Pod pod-projected-secrets-2e77af64-2c04-4588-b0cb-08614d7a517b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:56:08.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7061" for this suite.
Jan 24 14:56:14.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:56:14.712: INFO: namespace projected-7061 deletion completed in 6.178695416s

• [SLOW TEST:16.434 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
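The projected-secret test above consumes the same secret through two separate volumes mounted at different paths in one pod. A sketch of the volume and mount lists, assuming a hypothetical secret name:

```python
# Two projected volumes, both sourcing the same secret, mounted at
# two distinct paths so the test can read the data from each mount.
secret_name = "projected-secret-demo"  # illustrative name
volumes = [
    {"name": f"vol-{i}",
     "projected": {"sources": [{"secret": {"name": secret_name}}]}}
    for i in (1, 2)
]
volume_mounts = [
    {"name": f"vol-{i}", "mountPath": f"/etc/secret-{i}", "readOnly": True}
    for i in (1, 2)
]
```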
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:56:14.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 24 14:56:14.961: INFO: Waiting up to 5m0s for pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b" in namespace "emptydir-8088" to be "success or failure"
Jan 24 14:56:15.023: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 61.794085ms
Jan 24 14:56:17.031: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069914186s
Jan 24 14:56:19.044: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082488954s
Jan 24 14:56:21.065: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103400961s
Jan 24 14:56:23.072: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11124993s
STEP: Saw pod success
Jan 24 14:56:23.073: INFO: Pod "pod-ce10821e-8780-4eec-8f87-638f51228a3b" satisfied condition "success or failure"
Jan 24 14:56:23.076: INFO: Trying to get logs from node iruya-node pod pod-ce10821e-8780-4eec-8f87-638f51228a3b container test-container: 
STEP: delete the pod
Jan 24 14:56:23.114: INFO: Waiting for pod pod-ce10821e-8780-4eec-8f87-638f51228a3b to disappear
Jan 24 14:56:23.136: INFO: Pod pod-ce10821e-8780-4eec-8f87-638f51228a3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:56:23.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8088" for this suite.
Jan 24 14:56:29.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:56:29.440: INFO: namespace emptydir-8088 deletion completed in 6.298637683s

• [SLOW TEST:14.727 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
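The emptyDir test above ("non-root,0644,default") mounts an emptyDir volume on the node's default medium and writes a file as a non-root user, expecting 0644 permissions. A sketch of the pod spec fields involved — the UID, image, and command are assumptions:

```python
# emptyDir on the node's default medium, written by a non-root user.
# An empty dict for "emptyDir" selects node-local disk; the tmpfs
# variant would use {"medium": "Memory"} instead.
pod_spec = {
    "securityContext": {"runAsUser": 1001},  # non-root UID, assumed value
    "volumes": [{"name": "test-volume", "emptyDir": {}}],
    "containers": [{
        "name": "test-container",
        "image": "busybox",                  # assumed image
        # umask 022 makes newly created files mode 0644 (666 & ~022).
        "command": ["sh", "-c",
                    "umask 022; echo hello > /test-volume/f; ls -l /test-volume/f"],
        "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
    }],
}
```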
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:56:29.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d6e3d1c8-af8a-4cee-91d5-7b97a7e3be74
STEP: Creating a pod to test consume configMaps
Jan 24 14:56:29.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387" in namespace "configmap-3622" to be "success or failure"
Jan 24 14:56:29.558: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387": Phase="Pending", Reason="", readiness=false. Elapsed: 7.743085ms
Jan 24 14:56:31.579: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028832992s
Jan 24 14:56:33.589: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038694109s
Jan 24 14:56:35.600: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049409374s
Jan 24 14:56:37.607: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056710999s
STEP: Saw pod success
Jan 24 14:56:37.607: INFO: Pod "pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387" satisfied condition "success or failure"
Jan 24 14:56:37.611: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387 container configmap-volume-test: 
STEP: delete the pod
Jan 24 14:56:37.703: INFO: Waiting for pod pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387 to disappear
Jan 24 14:56:37.754: INFO: Pod pod-configmaps-5b6ca81a-5f6a-4c33-902e-b6d31e94d387 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:56:37.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3622" for this suite.
Jan 24 14:56:43.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:56:43.949: INFO: namespace configmap-3622 deletion completed in 6.188734062s

• [SLOW TEST:14.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
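The ConfigMap "volume with mappings" test above exercises the `items` field of a configMap volume source, which projects a chosen key to a custom file path instead of exposing every key under its own name. A sketch of the volume definition, with hypothetical key and path names:

```python
# ConfigMap volume with an explicit key-to-path mapping. Only the
# listed keys are projected, each at its given relative path.
volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-demo",  # illustrative name
        "items": [{"key": "data-1", "path": "path/to/data-2"}],
    },
}
```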
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:56:43.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:56:44.110: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 24 14:56:44.118: INFO: Number of nodes with available pods: 0
Jan 24 14:56:44.118: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 24 14:56:44.183: INFO: Number of nodes with available pods: 0
Jan 24 14:56:44.183: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:45.196: INFO: Number of nodes with available pods: 0
Jan 24 14:56:45.196: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:46.190: INFO: Number of nodes with available pods: 0
Jan 24 14:56:46.190: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:47.189: INFO: Number of nodes with available pods: 0
Jan 24 14:56:47.189: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:48.231: INFO: Number of nodes with available pods: 0
Jan 24 14:56:48.231: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:49.189: INFO: Number of nodes with available pods: 0
Jan 24 14:56:49.189: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:50.191: INFO: Number of nodes with available pods: 0
Jan 24 14:56:50.191: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:51.217: INFO: Number of nodes with available pods: 0
Jan 24 14:56:51.217: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:52.190: INFO: Number of nodes with available pods: 1
Jan 24 14:56:52.190: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 24 14:56:52.300: INFO: Number of nodes with available pods: 1
Jan 24 14:56:52.300: INFO: Number of running nodes: 0, number of available pods: 1
Jan 24 14:56:53.309: INFO: Number of nodes with available pods: 0
Jan 24 14:56:53.309: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 24 14:56:53.335: INFO: Number of nodes with available pods: 0
Jan 24 14:56:53.336: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:54.346: INFO: Number of nodes with available pods: 0
Jan 24 14:56:54.346: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:55.344: INFO: Number of nodes with available pods: 0
Jan 24 14:56:55.344: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:56.348: INFO: Number of nodes with available pods: 0
Jan 24 14:56:56.348: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:57.348: INFO: Number of nodes with available pods: 0
Jan 24 14:56:57.348: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:58.345: INFO: Number of nodes with available pods: 0
Jan 24 14:56:58.345: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:56:59.348: INFO: Number of nodes with available pods: 0
Jan 24 14:56:59.348: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:00.344: INFO: Number of nodes with available pods: 0
Jan 24 14:57:00.344: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:01.343: INFO: Number of nodes with available pods: 0
Jan 24 14:57:01.343: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:02.345: INFO: Number of nodes with available pods: 0
Jan 24 14:57:02.345: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:03.341: INFO: Number of nodes with available pods: 0
Jan 24 14:57:03.341: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:04.345: INFO: Number of nodes with available pods: 0
Jan 24 14:57:04.345: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:05.350: INFO: Number of nodes with available pods: 0
Jan 24 14:57:05.350: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:06.343: INFO: Number of nodes with available pods: 0
Jan 24 14:57:06.343: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:07.343: INFO: Number of nodes with available pods: 0
Jan 24 14:57:07.343: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:08.347: INFO: Number of nodes with available pods: 0
Jan 24 14:57:08.347: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:09.350: INFO: Number of nodes with available pods: 0
Jan 24 14:57:09.350: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:10.344: INFO: Number of nodes with available pods: 0
Jan 24 14:57:10.344: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:11.349: INFO: Number of nodes with available pods: 0
Jan 24 14:57:11.349: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:12.342: INFO: Number of nodes with available pods: 0
Jan 24 14:57:12.342: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:13.342: INFO: Number of nodes with available pods: 0
Jan 24 14:57:13.342: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:14.342: INFO: Number of nodes with available pods: 1
Jan 24 14:57:14.342: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2608, will wait for the garbage collector to delete the pods
Jan 24 14:57:14.426: INFO: Deleting DaemonSet.extensions daemon-set took: 23.532821ms
Jan 24 14:57:14.727: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.419393ms
Jan 24 14:57:26.635: INFO: Number of nodes with available pods: 0
Jan 24 14:57:26.635: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 14:57:26.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2608/daemonsets","resourceVersion":"21700880"},"items":null}

Jan 24 14:57:26.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2608/pods","resourceVersion":"21700880"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:57:26.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2608" for this suite.
Jan 24 14:57:32.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:57:32.853: INFO: namespace daemonsets-2608 deletion completed in 6.130447627s

• [SLOW TEST:48.904 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
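The "complex daemon" test above drives scheduling purely through labels: the DaemonSet carries a pod-template `nodeSelector`, so relabelling a node (blue, then green) makes the controller launch or evict the daemon pod, and the test then patches the selector and switches the update strategy to RollingUpdate. A sketch of such a DaemonSet manifest — the labels are assumptions, while the nginx image appears later in this log:

```python
# DaemonSet whose pods only schedule on nodes labelled color=blue.
# Relabelling a node to green unschedules the pod; patching
# nodeSelector to {"color": "green"} brings it back.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"app": "daemon-set"}},  # assumed label
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"app": "daemon-set"}},
            "spec": {
                "nodeSelector": {"color": "blue"},
                "containers": [{
                    "name": "app",
                    "image": "docker.io/library/nginx:1.14-alpine",
                }],
            },
        },
    },
}
```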
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:57:32.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 14:57:32.912: INFO: Waiting up to 5m0s for pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee" in namespace "emptydir-801" to be "success or failure"
Jan 24 14:57:32.954: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee": Phase="Pending", Reason="", readiness=false. Elapsed: 42.32484ms
Jan 24 14:57:34.965: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053328572s
Jan 24 14:57:36.971: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05921914s
Jan 24 14:57:38.988: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075770083s
Jan 24 14:57:41.009: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096672565s
STEP: Saw pod success
Jan 24 14:57:41.009: INFO: Pod "pod-069f18c3-2d54-4c2a-8821-393d53191dee" satisfied condition "success or failure"
Jan 24 14:57:41.019: INFO: Trying to get logs from node iruya-node pod pod-069f18c3-2d54-4c2a-8821-393d53191dee container test-container: 
STEP: delete the pod
Jan 24 14:57:41.095: INFO: Waiting for pod pod-069f18c3-2d54-4c2a-8821-393d53191dee to disappear
Jan 24 14:57:41.102: INFO: Pod pod-069f18c3-2d54-4c2a-8821-393d53191dee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:57:41.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-801" for this suite.
Jan 24 14:57:47.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:57:47.243: INFO: namespace emptydir-801 deletion completed in 6.136456146s

• [SLOW TEST:14.389 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
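The second emptyDir test above ("root,0666,tmpfs") differs from the earlier default-medium variant in two ways: the volume is backed by tmpfs via `medium: "Memory"`, and the file is written world-writable. A sketch of just the distinguishing fields, with an assumed write command:

```python
# tmpfs-backed emptyDir: "Memory" backs the volume with RAM
# instead of node-local disk.
volume = {"name": "test-volume", "emptyDir": {"medium": "Memory"}}

# umask 000 makes newly created files mode 0666 (666 & ~000);
# the exact command the e2e test runs is not shown in this log.
write_cmd = "umask 000; echo hello > /test-volume/f"
```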
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:57:47.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 24 14:57:47.378: INFO: Create a RollingUpdate DaemonSet
Jan 24 14:57:47.385: INFO: Check that daemon pods launch on every node of the cluster
Jan 24 14:57:47.414: INFO: Number of nodes with available pods: 0
Jan 24 14:57:47.415: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:48.714: INFO: Number of nodes with available pods: 0
Jan 24 14:57:48.714: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:49.808: INFO: Number of nodes with available pods: 0
Jan 24 14:57:49.808: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:50.432: INFO: Number of nodes with available pods: 0
Jan 24 14:57:50.433: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:51.464: INFO: Number of nodes with available pods: 0
Jan 24 14:57:51.464: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:54.046: INFO: Number of nodes with available pods: 0
Jan 24 14:57:54.046: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:54.476: INFO: Number of nodes with available pods: 0
Jan 24 14:57:54.476: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:56.107: INFO: Number of nodes with available pods: 0
Jan 24 14:57:56.107: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:56.462: INFO: Number of nodes with available pods: 0
Jan 24 14:57:56.462: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:57.428: INFO: Number of nodes with available pods: 1
Jan 24 14:57:57.428: INFO: Node iruya-node is running more than one daemon pod
Jan 24 14:57:58.445: INFO: Number of nodes with available pods: 2
Jan 24 14:57:58.445: INFO: Number of running nodes: 2, number of available pods: 2
Jan 24 14:57:58.445: INFO: Update the DaemonSet to trigger a rollout
Jan 24 14:57:58.458: INFO: Updating DaemonSet daemon-set
Jan 24 14:58:17.503: INFO: Roll back the DaemonSet before rollout is complete
Jan 24 14:58:17.542: INFO: Updating DaemonSet daemon-set
Jan 24 14:58:17.542: INFO: Make sure DaemonSet rollback is complete
Jan 24 14:58:17.562: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:17.562: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:18.587: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:18.587: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:19.582: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:19.582: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:20.586: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:20.586: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:21.577: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:21.577: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:22.590: INFO: Wrong image for pod: daemon-set-qxrbv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 24 14:58:22.590: INFO: Pod daemon-set-qxrbv is not available
Jan 24 14:58:23.584: INFO: Pod daemon-set-pv4f6 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8156, will wait for the garbage collector to delete the pods
Jan 24 14:58:23.737: INFO: Deleting DaemonSet.extensions daemon-set took: 23.745926ms
Jan 24 14:58:24.038: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.644985ms
Jan 24 14:58:37.942: INFO: Number of nodes with available pods: 0
Jan 24 14:58:37.942: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 14:58:37.945: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8156/daemonsets","resourceVersion":"21701092"},"items":null}

Jan 24 14:58:37.947: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8156/pods","resourceVersion":"21701092"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:58:37.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8156" for this suite.
Jan 24 14:58:43.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:58:44.116: INFO: namespace daemonsets-8156 deletion completed in 6.155990411s

• [SLOW TEST:56.873 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
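The rollback test above updates the DaemonSet template to an unpullable image (`foo:non-existent`, per the log), then rolls back to the previous revision (`docker.io/library/nginx:1.14-alpine`) before the broken rollout completes, asserting that already-healthy pods are not restarted. A toy model of the revision bookkeeping, using the two image names from the log; the list-based history is purely illustrative, not how the controller stores revisions:

```python
# Toy model of DaemonSet rollout history: each template update appends
# a revision; rolling back restores the previous template.
good_image = "docker.io/library/nginx:1.14-alpine"  # expected image in the log
bad_image = "foo:non-existent"                      # broken image in the log

history = [good_image]      # revision 1: healthy rollout
history.append(bad_image)   # revision 2: rollout stalls, image can't be pulled
rollback_to = history[-2]   # rollback target: the last working template
```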
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:58:44.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-abc3e3af-e40a-431a-a2ee-fa080aa11100
STEP: Creating a pod to test consume configMaps
Jan 24 14:58:44.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837" in namespace "configmap-9146" to be "success or failure"
Jan 24 14:58:44.231: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616717ms
Jan 24 14:58:46.241: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016977574s
Jan 24 14:58:48.253: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028232151s
Jan 24 14:58:50.260: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035886074s
Jan 24 14:58:52.268: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043979144s
STEP: Saw pod success
Jan 24 14:58:52.269: INFO: Pod "pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837" satisfied condition "success or failure"
Jan 24 14:58:52.276: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837 container configmap-volume-test: 
STEP: delete the pod
Jan 24 14:58:52.378: INFO: Waiting for pod pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837 to disappear
Jan 24 14:58:52.393: INFO: Pod pod-configmaps-8dc96a1e-3159-4f9e-87fd-c8abf3cb4837 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:58:52.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9146" for this suite.
Jan 24 14:58:58.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:58:58.624: INFO: namespace configmap-9146 deletion completed in 6.220162779s

• [SLOW TEST:14.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
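For context on the test block above: the "mappings and Item mode set" case mounts a ConfigMap as a volume, remaps a key to a different file path via `items`, and sets a per-item file `mode`. A minimal sketch of the kind of pod spec this test creates (the names, image, and key are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative; the real test uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the remapped file so the test can inspect contents and mode
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map  # illustrative ConfigMap name
      items:
      - key: data-2                    # ConfigMap key
        path: path/to/data-2           # remapped path inside the volume ("mappings")
        mode: 0400                     # per-item file mode ("Item mode set")
```

The pod runs to `Succeeded` once the command exits 0, which matches the "success or failure" condition polled in the log.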
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:58:58.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 24 14:59:07.404: INFO: Successfully updated pod "annotationupdate1ca6834f-2951-4043-9c53-14e79aacfacf"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:59:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9793" for this suite.
Jan 24 14:59:31.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:59:31.633: INFO: namespace downward-api-9793 deletion completed in 22.156828145s

• [SLOW TEST:33.009 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:59:31.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9dcc6f64-7811-4b91-ba64-52a314a922a9
STEP: Creating a pod to test consume configMaps
Jan 24 14:59:31.754: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e" in namespace "projected-8594" to be "success or failure"
Jan 24 14:59:31.769: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.37928ms
Jan 24 14:59:33.796: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041939998s
Jan 24 14:59:35.805: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051606436s
Jan 24 14:59:37.820: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066719137s
Jan 24 14:59:39.828: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074264695s
STEP: Saw pod success
Jan 24 14:59:39.828: INFO: Pod "pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e" satisfied condition "success or failure"
Jan 24 14:59:39.830: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 14:59:39.976: INFO: Waiting for pod pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e to disappear
Jan 24 14:59:40.023: INFO: Pod pod-projected-configmaps-2a85f300-f3f3-452b-b29b-efb00136842e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 14:59:40.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8594" for this suite.
Jan 24 14:59:46.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 14:59:46.297: INFO: namespace projected-8594 deletion completed in 6.264268459s

• [SLOW TEST:14.664 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 14:59:46.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 24 14:59:54.472: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 24 15:00:09.649: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:00:09.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1004" for this suite.
Jan 24 15:00:15.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:00:15.888: INFO: namespace pods-1004 deletion completed in 6.226114804s

• [SLOW TEST:29.590 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:00:15.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b861ce7e-b9ed-46f7-992b-e7a1414ebd4f
STEP: Creating a pod to test consume configMaps
Jan 24 15:00:16.080: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7" in namespace "projected-2089" to be "success or failure"
Jan 24 15:00:16.168: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 88.157165ms
Jan 24 15:00:18.175: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095062104s
Jan 24 15:00:20.186: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105635409s
Jan 24 15:00:22.300: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220153584s
Jan 24 15:00:24.316: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.235751814s
STEP: Saw pod success
Jan 24 15:00:24.316: INFO: Pod "pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7" satisfied condition "success or failure"
Jan 24 15:00:24.324: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 15:00:24.377: INFO: Waiting for pod pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7 to disappear
Jan 24 15:00:24.380: INFO: Pod pod-projected-configmaps-32c92326-0896-4439-ac67-96f816f71ce7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:00:24.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2089" for this suite.
Jan 24 15:00:30.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:00:30.628: INFO: namespace projected-2089 deletion completed in 6.244559255s

• [SLOW TEST:14.740 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:00:30.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 15:00:30.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875" in namespace "downward-api-2105" to be "success or failure"
Jan 24 15:00:30.765: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875": Phase="Pending", Reason="", readiness=false. Elapsed: 11.0939ms
Jan 24 15:00:33.119: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365439552s
Jan 24 15:00:35.129: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375349169s
Jan 24 15:00:37.145: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391112911s
Jan 24 15:00:39.158: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.404028435s
STEP: Saw pod success
Jan 24 15:00:39.158: INFO: Pod "downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875" satisfied condition "success or failure"
Jan 24 15:00:39.162: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875 container client-container: 
STEP: delete the pod
Jan 24 15:00:39.226: INFO: Waiting for pod downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875 to disappear
Jan 24 15:00:39.256: INFO: Pod downwardapi-volume-4fba8479-c956-467e-bded-1c546c91d875 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:00:39.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2105" for this suite.
Jan 24 15:00:45.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:00:45.480: INFO: namespace downward-api-2105 deletion completed in 6.216589344s

• [SLOW TEST:14.851 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
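The "should provide container's cpu limit" test above uses a downward API volume with a `resourceFieldRef` so the container can read its own CPU limit from a file. A hedged sketch of such a pod spec (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # matches the container name seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 1250m                    # the value projected into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu        # exposes the CPU limit via the volume
```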
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:00:45.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 15:00:45.573: INFO: Waiting up to 5m0s for pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60" in namespace "emptydir-8958" to be "success or failure"
Jan 24 15:00:45.578: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60": Phase="Pending", Reason="", readiness=false. Elapsed: 5.254739ms
Jan 24 15:00:47.586: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013288673s
Jan 24 15:00:49.597: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023631739s
Jan 24 15:00:51.605: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032416127s
Jan 24 15:00:53.624: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050774207s
STEP: Saw pod success
Jan 24 15:00:53.624: INFO: Pod "pod-d0671c7d-740b-49fb-b91b-91500f632b60" satisfied condition "success or failure"
Jan 24 15:00:53.630: INFO: Trying to get logs from node iruya-node pod pod-d0671c7d-740b-49fb-b91b-91500f632b60 container test-container: 
STEP: delete the pod
Jan 24 15:00:53.692: INFO: Waiting for pod pod-d0671c7d-740b-49fb-b91b-91500f632b60 to disappear
Jan 24 15:00:53.695: INFO: Pod pod-d0671c7d-740b-49fb-b91b-91500f632b60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:00:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8958" for this suite.
Jan 24 15:00:59.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:00:59.922: INFO: namespace emptydir-8958 deletion completed in 6.221266052s

• [SLOW TEST:14.442 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
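The EmptyDir "(non-root,0666,tmpfs)" case above runs as a non-root user against a memory-backed emptyDir; the 0666 in the test name refers to the file mode the test binary creates inside the volume. A rough sketch of the relevant spec fields (image and names are illustrative; the real test uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    # Create a 0666 file on tmpfs and report its mode
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # "tmpfs" part of the test name
```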
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:00:59.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 24 15:01:00.070: INFO: Waiting up to 5m0s for pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063" in namespace "downward-api-457" to be "success or failure"
Jan 24 15:01:00.104: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063": Phase="Pending", Reason="", readiness=false. Elapsed: 33.348614ms
Jan 24 15:01:02.112: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041492336s
Jan 24 15:01:04.122: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05143278s
Jan 24 15:01:06.130: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059879641s
Jan 24 15:01:08.139: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068838638s
STEP: Saw pod success
Jan 24 15:01:08.139: INFO: Pod "downward-api-13aafb35-901a-48f2-9dee-e1618cb86063" satisfied condition "success or failure"
Jan 24 15:01:08.144: INFO: Trying to get logs from node iruya-node pod downward-api-13aafb35-901a-48f2-9dee-e1618cb86063 container dapi-container: 
STEP: delete the pod
Jan 24 15:01:08.225: INFO: Waiting for pod downward-api-13aafb35-901a-48f2-9dee-e1618cb86063 to disappear
Jan 24 15:01:08.231: INFO: Pod downward-api-13aafb35-901a-48f2-9dee-e1618cb86063 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:01:08.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-457" for this suite.
Jan 24 15:01:14.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:01:14.424: INFO: namespace downward-api-457 deletion completed in 6.179394163s

• [SLOW TEST:14.501 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
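The Downward API env-var test above injects `limits.cpu/memory` and `requests.cpu/memory` into the container environment via `resourceFieldRef`. A minimal illustrative fragment of the env wiring (variable names and values are assumptions, not taken from the log):

```yaml
containers:
- name: dapi-container                # matches the container name in the log
  image: busybox
  command: ["sh", "-c", "env"]
  resources:
    requests: {cpu: 250m, memory: 32Mi}
    limits:   {cpu: 1250m, memory: 64Mi}
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
  - name: CPU_REQUEST
    valueFrom:
      resourceFieldRef:
        resource: requests.cpu
  - name: MEMORY_REQUEST
    valueFrom:
      resourceFieldRef:
        resource: requests.memory
```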
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:01:14.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-acd2472a-6125-4c1c-8604-c57d285ba0b5
STEP: Creating a pod to test consume configMaps
Jan 24 15:01:14.590: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3" in namespace "projected-9876" to be "success or failure"
Jan 24 15:01:14.603: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.21457ms
Jan 24 15:01:16.620: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029748724s
Jan 24 15:01:18.627: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037277552s
Jan 24 15:01:20.644: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053684326s
Jan 24 15:01:22.651: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060654548s
STEP: Saw pod success
Jan 24 15:01:22.651: INFO: Pod "pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3" satisfied condition "success or failure"
Jan 24 15:01:22.655: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 15:01:22.983: INFO: Waiting for pod pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3 to disappear
Jan 24 15:01:22.992: INFO: Pod pod-projected-configmaps-9d4c5a6b-3cd0-4017-922f-0a1528d5a3b3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:01:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9876" for this suite.
Jan 24 15:01:29.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:01:29.115: INFO: namespace projected-9876 deletion completed in 6.117586856s

• [SLOW TEST:14.691 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:01:29.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 15:01:36.444: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:01:36.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2825" for this suite.
Jan 24 15:01:42.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:01:42.667: INFO: namespace container-runtime-2825 deletion completed in 6.129449729s

• [SLOW TEST:13.552 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
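The termination-message test above (which expected and saw `DONE`) exercises a custom `terminationMessagePath` on a container running as a non-root user. A hedged sketch of such a spec (user ID, image, and path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # "non-root user" part of the test name
  containers:
  - name: termination-message-container
    image: busybox
    # Write the message to the non-default path before exiting
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
```

After the container terminates, the kubelet surfaces the file contents in the container status as the termination message, which is what the assertion on `DONE` checks.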
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:01:42.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 24 15:01:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Jan 24 15:01:45.103: INFO: stderr: ""
Jan 24 15:01:45.103: INFO: stdout: "pod/pause created\n"
Jan 24 15:01:45.103: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 24 15:01:45.103: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5748" to be "running and ready"
Jan 24 15:01:45.125: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.795461ms
Jan 24 15:01:47.136: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03278071s
Jan 24 15:01:49.142: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039086539s
Jan 24 15:01:51.147: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044527545s
Jan 24 15:01:53.156: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.052714249s
Jan 24 15:01:53.156: INFO: Pod "pause" satisfied condition "running and ready"
Jan 24 15:01:53.156: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 24 15:01:53.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5748'
Jan 24 15:01:53.335: INFO: stderr: ""
Jan 24 15:01:53.335: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 24 15:01:53.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5748'
Jan 24 15:01:53.453: INFO: stderr: ""
Jan 24 15:01:53.453: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 24 15:01:53.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5748'
Jan 24 15:01:53.573: INFO: stderr: ""
Jan 24 15:01:53.573: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 24 15:01:53.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5748'
Jan 24 15:01:53.763: INFO: stderr: ""
Jan 24 15:01:53.763: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 24 15:01:53.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Jan 24 15:01:53.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 15:01:53.954: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 24 15:01:53.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5748'
Jan 24 15:01:54.273: INFO: stderr: "No resources found.\n"
Jan 24 15:01:54.273: INFO: stdout: ""
Jan 24 15:01:54.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5748 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 15:01:54.343: INFO: stderr: ""
Jan 24 15:01:54.343: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:01:54.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5748" for this suite.
Jan 24 15:02:00.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:02:00.475: INFO: namespace kubectl-5748 deletion completed in 6.126772854s

• [SLOW TEST:17.807 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
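The label test above boils down to three kubectl operations: attach a label, display it as an extra column, and remove it. A minimal sketch of those commands, assuming a running cluster and reusing the pod name (`pause`) and namespace (`kubectl-5748`) from the log — adapt both to your environment:

```shell
# Add a label (key=value) to the pod.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-5748

# Show the label as an extra output column; -L <key> adds a TESTING-LABEL column.
kubectl get pod pause -L testing-label --namespace=kubectl-5748

# Remove the label: a trailing "-" after the key deletes it.
kubectl label pods pause testing-label- --namespace=kubectl-5748
```

Note that after removal, `-L testing-label` still prints the column header, just with an empty value — which is exactly what the verification step in the log checks for.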
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:02:00.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 15:02:00.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790" in namespace "downward-api-4258" to be "success or failure"
Jan 24 15:02:00.653: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790": Phase="Pending", Reason="", readiness=false. Elapsed: 22.923748ms
Jan 24 15:02:02.664: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034133898s
Jan 24 15:02:04.674: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044266829s
Jan 24 15:02:06.679: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049483486s
Jan 24 15:02:08.687: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056779799s
STEP: Saw pod success
Jan 24 15:02:08.687: INFO: Pod "downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790" satisfied condition "success or failure"
Jan 24 15:02:08.692: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790 container client-container: 
STEP: delete the pod
Jan 24 15:02:09.049: INFO: Waiting for pod downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790 to disappear
Jan 24 15:02:09.064: INFO: Pod downwardapi-volume-d463d172-7bcd-49df-8958-f231e4ff4790 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:02:09.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4258" for this suite.
Jan 24 15:02:15.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:02:15.245: INFO: namespace downward-api-4258 deletion completed in 6.172049982s

• [SLOW TEST:14.770 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
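The "podname only" test exercises the downward API volume plugin, which writes pod metadata into files the container can read. A minimal sketch of the kind of pod it creates — the image, file paths, and pod name here are illustrative, not taken from the test source:

```shell
# Downward API volume: expose metadata.name as a file inside the container.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
```

The container prints the contents of `/etc/podinfo/podname` and exits, so the pod reaches `Succeeded` — the "success or failure" condition the framework polls for in the log above.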
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:02:15.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 24 15:02:15.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee" in namespace "downward-api-6558" to be "success or failure"
Jan 24 15:02:15.400: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Pending", Reason="", readiness=false. Elapsed: 17.213692ms
Jan 24 15:02:17.409: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025914528s
Jan 24 15:02:19.415: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031701066s
Jan 24 15:02:21.425: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041362694s
Jan 24 15:02:23.433: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049881876s
Jan 24 15:02:25.443: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059988041s
STEP: Saw pod success
Jan 24 15:02:25.443: INFO: Pod "downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee" satisfied condition "success or failure"
Jan 24 15:02:25.447: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee container client-container: 
STEP: delete the pod
Jan 24 15:02:25.536: INFO: Waiting for pod downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee to disappear
Jan 24 15:02:25.541: INFO: Pod downwardapi-volume-dd26fff0-a9ae-4935-babc-2e56f87464ee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:02:25.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6558" for this suite.
Jan 24 15:02:31.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:02:31.681: INFO: namespace downward-api-6558 deletion completed in 6.133450558s

• [SLOW TEST:16.435 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
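The CPU-request variant uses the same volume plugin but with a `resourceFieldRef` instead of a `fieldRef`. A sketch under the same caveats (names, image, and the 250m request are illustrative):

```shell
# Downward API volume: expose the container's own CPU request as a file.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
```

The `divisor` controls the unit: with `1m`, a 250m request is written as `250`; with the default divisor of `1`, it would be rounded up to whole cores.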
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:02:31.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 24 15:02:38.771: INFO: 0 pods remaining
Jan 24 15:02:38.772: INFO: 0 pods have nil DeletionTimestamp
Jan 24 15:02:38.772: INFO: 
STEP: Gathering metrics
W0124 15:02:39.775988       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 15:02:39.776: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:02:39.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9936" for this suite.
Jan 24 15:02:53.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:02:53.976: INFO: namespace gc-9936 deletion completed in 14.194622498s

• [SLOW TEST:22.295 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
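"If the deleteOptions says so" refers to the `Foreground` propagation policy: the owner (here an rc) is kept around — marked with a `deletionTimestamp` and a `foregroundDeletion` finalizer — until the garbage collector has removed all its dependents. The test sets this via the API; a sketch of the equivalent raw call (namespace and rc name illustrative):

```shell
# Foreground cascading delete via the API: the rc survives until its pods are gone.
kubectl proxy --port=8080 &
curl -X DELETE \
  http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/simpletest.rc \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```

On recent kubectl releases (v1.20+) the same policy is available directly as `kubectl delete rc simpletest.rc --cascade=foreground`; the v1.15 client used in this run predates that flag form.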
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:02:53.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4ef60948-0c13-4460-8646-19056ab4f128
STEP: Creating a pod to test consume configMaps
Jan 24 15:02:54.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34" in namespace "projected-2858" to be "success or failure"
Jan 24 15:02:54.177: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34": Phase="Pending", Reason="", readiness=false. Elapsed: 21.31989ms
Jan 24 15:02:56.188: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032371989s
Jan 24 15:02:58.197: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041507729s
Jan 24 15:03:00.208: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052554126s
Jan 24 15:03:02.213: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058099994s
STEP: Saw pod success
Jan 24 15:03:02.214: INFO: Pod "pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34" satisfied condition "success or failure"
Jan 24 15:03:02.216: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 15:03:02.261: INFO: Waiting for pod pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34 to disappear
Jan 24 15:03:02.279: INFO: Pod pod-projected-configmaps-aaead944-b2e3-4e11-8d25-53ffb1c86e34 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:03:02.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2858" for this suite.
Jan 24 15:03:08.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:03:08.448: INFO: namespace projected-2858 deletion completed in 6.164796746s

• [SLOW TEST:14.471 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
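The projected-configMap test mounts ConfigMap keys through a `projected` volume, which can merge several sources (configMaps, secrets, downward API, service account tokens) into one directory. A minimal sketch with a single configMap source — all names, keys, and the image are illustrative:

```shell
# Create a ConfigMap, then consume it through a projected volume.
kubectl create configmap example-configmap --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/projected
  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: example-configmap
EOF
```

Each ConfigMap key becomes a file under the mount path, so the container reads back `value-1` from `/etc/projected/data-1` and exits successfully.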
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 24 15:03:08.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0124 15:03:20.277166       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 15:03:20.277: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 24 15:03:20.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9750" for this suite.
Jan 24 15:03:41.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 15:03:41.935: INFO: namespace gc-9750 deletion completed in 18.831319369s

• [SLOW TEST:33.487 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
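The garbage-collector rule exercised above hinges on `metadata.ownerReferences`: a dependent is deleted only once *every* listed owner is gone. The test patches half the pods of `simpletest-rc-to-be-deleted` to also list `simpletest-rc-to-stay` as an owner, so those pods must survive the first rc's deletion. A sketch of how to inspect this on a live cluster (pod name illustrative):

```shell
# List the names of all owners recorded on a pod; a pod with two rc owners
# survives until both rcs have been deleted.
kubectl get pod simpletest-rc-to-be-deleted-abcde \
  -o jsonpath='{.metadata.ownerReferences[*].name}'
```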
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 24 15:03:41.936: INFO: Running AfterSuite actions on all nodes
Jan 24 15:03:41.936: INFO: Running AfterSuite actions on node 1
Jan 24 15:03:41.936: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7650.563 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS