I0213 12:56:10.551451 8 e2e.go:243] Starting e2e run "ca3a3677-8b5b-42db-ad91-9cc60f12a6da" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581598569 - Will randomize all specs
Will run 215 of 4412 specs

Feb 13 12:56:11.145: INFO: >>> kubeConfig: /root/.kube/config
Feb 13 12:56:11.150: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 13 12:56:11.180: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 13 12:56:11.216: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 13 12:56:11.216: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 13 12:56:11.216: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 13 12:56:11.226: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 13 12:56:11.226: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 13 12:56:11.226: INFO: e2e test version: v1.15.7
Feb 13 12:56:11.227: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 12:56:11.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Feb 13 12:56:13.162: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 13 12:56:13.188: INFO: Waiting up to 5m0s for pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002" in namespace "emptydir-3070" to be "success or failure"
Feb 13 12:56:13.242: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Pending", Reason="", readiness=false. Elapsed: 53.927867ms
Feb 13 12:56:15.255: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066744463s
Feb 13 12:56:17.346: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157295179s
Feb 13 12:56:19.355: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167029865s
Feb 13 12:56:21.373: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185059468s
Feb 13 12:56:23.381: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192093363s
STEP: Saw pod success
Feb 13 12:56:23.381: INFO: Pod "pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002" satisfied condition "success or failure"
Feb 13 12:56:23.386: INFO: Trying to get logs from node iruya-node pod pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002 container test-container:
STEP: delete the pod
Feb 13 12:56:23.529: INFO: Waiting for pod pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002 to disappear
Feb 13 12:56:23.540: INFO: Pod pod-b2d05b6b-6e35-4d8a-accd-d9be23d80002 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 12:56:23.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3070" for this suite.
Feb 13 12:56:29.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:56:29.803: INFO: namespace emptydir-3070 deletion completed in 6.174735845s

• [SLOW TEST:18.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
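For orientation, the spec above exercises an emptyDir volume on the default medium (node disk) with a 0644 file mode. A minimal hand-run sketch of the same check, assuming a reachable cluster; the pod name, busybox image, shell command, and mount path are illustrative stand-ins for the framework's generated test pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium: backed by node storage
  containers:
  - name: test-container
    image: busybox                # stand-in for the suite's test image
    command: ["sh", "-c", "echo hello > /mnt/scratch/f && chmod 0644 /mnt/scratch/f && ls -l /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
EOF
kubectl logs emptydir-0644-demo      # mirrors the suite's "get logs" step
kubectl delete pod emptydir-0644-demo

The "success or failure" condition in the log corresponds to the pod reaching Phase=Succeeded, which is why the sketch uses restartPolicy Never.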
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 12:56:29.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:56:29.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b" in namespace "projected-5103" to be "success or failure"
Feb 13 12:56:30.078: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.577603ms
Feb 13 12:56:32.084: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114289827s
Feb 13 12:56:34.099: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128679008s
Feb 13 12:56:36.108: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138123328s
Feb 13 12:56:38.116: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146511716s
Feb 13 12:56:40.131: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.16068212s
Feb 13 12:56:42.139: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.169383002s
STEP: Saw pod success
Feb 13 12:56:42.139: INFO: Pod "downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b" satisfied condition "success or failure"
Feb 13 12:56:42.142: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b container client-container:
STEP: delete the pod
Feb 13 12:56:42.337: INFO: Waiting for pod downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b to disappear
Feb 13 12:56:42.353: INFO: Pod downwardapi-volume-650cd596-6ffe-490c-966a-6e28fd621b0b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 12:56:42.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5103" for this suite.
Feb 13 12:56:48.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:56:48.608: INFO: namespace projected-5103 deletion completed in 6.248072729s

• [SLOW TEST:18.805 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
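This spec reads the pod's own name back out of a projected downward API volume. A sketch of an analogous manifest, with illustrative names; the suite's client-container does essentially this cat:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname          # exposed as /etc/podinfo/podname
            fieldRef:
              fieldPath: metadata.name
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF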
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 12:56:48.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6015
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 13 12:56:48.873: INFO: Found 0 stateful pods, waiting for 3
Feb 13 12:56:58.951: INFO: Found 2 stateful pods, waiting for 3
Feb 13 12:57:08.882: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:57:08.882: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:57:08.882: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 12:57:18.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:57:18.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:57:18.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:57:18.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 12:57:21.243: INFO: stderr: "I0213 12:57:20.978117 38 log.go:172] (0xc0007300b0) (0xc000648320) Create stream\nI0213 12:57:20.978520 38 log.go:172] (0xc0007300b0) (0xc000648320) Stream added, broadcasting: 1\nI0213 12:57:20.984212 38 log.go:172] (0xc0007300b0) Reply frame received for 1\nI0213 12:57:20.984327 38 log.go:172] (0xc0007300b0) (0xc00074c0a0) Create stream\nI0213 12:57:20.984350 38 log.go:172] (0xc0007300b0) (0xc00074c0a0) Stream added, broadcasting: 3\nI0213 12:57:20.986961 38 log.go:172] (0xc0007300b0) Reply frame received for 3\nI0213 12:57:20.987023 38 log.go:172] (0xc0007300b0) (0xc0003ee000) Create stream\nI0213 12:57:20.987043 38 log.go:172] (0xc0007300b0) (0xc0003ee000) Stream added, broadcasting: 5\nI0213 12:57:20.988283 38 log.go:172] (0xc0007300b0) Reply frame received for 5\nI0213 12:57:21.103920 38 log.go:172] (0xc0007300b0) Data frame received for 5\nI0213 12:57:21.104151 38 log.go:172] (0xc0003ee000) (5) Data frame handling\nI0213 12:57:21.104218 38 log.go:172] (0xc0003ee000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 12:57:21.153690 38 log.go:172] (0xc0007300b0) Data frame received for 3\nI0213 12:57:21.153765 38 log.go:172] (0xc00074c0a0) (3) Data frame handling\nI0213 12:57:21.153790 38 log.go:172] (0xc00074c0a0) (3) Data frame sent\nI0213 12:57:21.232729 38 log.go:172] (0xc0007300b0) Data frame received for 1\nI0213 12:57:21.232912 38 log.go:172] (0xc0007300b0) (0xc0003ee000) Stream removed, broadcasting: 5\nI0213 12:57:21.233053 38 log.go:172] (0xc000648320) (1) Data frame handling\nI0213 12:57:21.233075 38 log.go:172] (0xc000648320) (1) Data frame sent\nI0213 12:57:21.233084 38 log.go:172] (0xc0007300b0) (0xc00074c0a0) Stream removed, broadcasting: 3\nI0213 12:57:21.233104 38 log.go:172] (0xc0007300b0) (0xc000648320) Stream removed, broadcasting: 1\nI0213 12:57:21.233128 38 log.go:172] (0xc0007300b0) Go away received\nI0213 12:57:21.234114 38 log.go:172] (0xc0007300b0) (0xc000648320) Stream removed, broadcasting: 1\nI0213 12:57:21.234129 38 log.go:172] (0xc0007300b0) (0xc00074c0a0) Stream removed, broadcasting: 3\nI0213 12:57:21.234134 38 log.go:172] (0xc0007300b0) (0xc0003ee000) Stream removed, broadcasting: 5\n"
Feb 13 12:57:21.243: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 12:57:21.243: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 13 12:57:31.301: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 13 12:57:41.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:57:41.953: INFO: stderr: "I0213 12:57:41.740224 65 log.go:172] (0xc00012a6e0) (0xc0004386e0) Create stream\nI0213 12:57:41.740476 65 log.go:172] (0xc00012a6e0) (0xc0004386e0) Stream added, broadcasting: 1\nI0213 12:57:41.758243 65 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0213 12:57:41.758503 65 log.go:172] (0xc00012a6e0) (0xc0004ce320) Create stream\nI0213 12:57:41.758573 65 log.go:172] (0xc00012a6e0) (0xc0004ce320) Stream added, broadcasting: 3\nI0213 12:57:41.762409 65 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0213 12:57:41.762476 65 log.go:172] (0xc00012a6e0) (0xc000438000) Create stream\nI0213 12:57:41.762526 65 log.go:172] (0xc00012a6e0) (0xc000438000) Stream added, broadcasting: 5\nI0213 12:57:41.764539 65 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0213 12:57:41.858716 65 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0213 12:57:41.859154 65 log.go:172] (0xc000438000) (5) Data frame handling\nI0213 12:57:41.859218 65 log.go:172] (0xc000438000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 12:57:41.859463 65 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0213 12:57:41.859681 65 log.go:172] (0xc0004ce320) (3) Data frame handling\nI0213 12:57:41.859814 65 log.go:172] (0xc0004ce320) (3) Data frame sent\nI0213 12:57:41.944786 65 log.go:172] (0xc00012a6e0) (0xc0004ce320) Stream removed, broadcasting: 3\nI0213 12:57:41.944863 65 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0213 12:57:41.944886 65 log.go:172] (0xc0004386e0) (1) Data frame handling\nI0213 12:57:41.944903 65 log.go:172] (0xc0004386e0) (1) Data frame sent\nI0213 12:57:41.944919 65 log.go:172] (0xc00012a6e0) (0xc0004386e0) Stream removed, broadcasting: 1\nI0213 12:57:41.944957 65 log.go:172] (0xc00012a6e0) (0xc000438000) Stream removed, broadcasting: 5\nI0213 12:57:41.944991 65 log.go:172] (0xc00012a6e0) Go away received\nI0213 12:57:41.945851 65 log.go:172] (0xc00012a6e0) (0xc0004386e0) Stream removed, broadcasting: 1\nI0213 12:57:41.945879 65 log.go:172] (0xc00012a6e0) (0xc0004ce320) Stream removed, broadcasting: 3\nI0213 12:57:41.945904 65 log.go:172] (0xc00012a6e0) (0xc000438000) Stream removed, broadcasting: 5\n"
Feb 13 12:57:41.954: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 13 12:57:41.954: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 13 12:57:51.990: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:57:51.990: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:57:51.990: INFO: Waiting for Pod statefulset-6015/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:57:51.990: INFO: Waiting for Pod statefulset-6015/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:02.013: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:58:02.013: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:02.013: INFO: Waiting for Pod statefulset-6015/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:12.633: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:58:12.633: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:22.006: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:58:22.006: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:32.025: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:58:32.026: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 12:58:42.011: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 13 12:58:52.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 12:58:52.470: INFO: stderr: "I0213 12:58:52.220234 86 log.go:172] (0xc0009880b0) (0xc0007da640) Create stream\nI0213 12:58:52.220621 86 log.go:172] (0xc0009880b0) (0xc0007da640) Stream added, broadcasting: 1\nI0213 12:58:52.225043 86 log.go:172] (0xc0009880b0) Reply frame received for 1\nI0213 12:58:52.225095 86 log.go:172] (0xc0009880b0) (0xc0007b4000) Create stream\nI0213 12:58:52.225107 86 log.go:172] (0xc0009880b0) (0xc0007b4000) Stream added, broadcasting: 3\nI0213 12:58:52.226093 86 log.go:172] (0xc0009880b0) Reply frame received for 3\nI0213 12:58:52.226117 86 log.go:172] (0xc0009880b0) (0xc0007da6e0) Create stream\nI0213 12:58:52.226123 86 log.go:172] (0xc0009880b0) (0xc0007da6e0) Stream added, broadcasting: 5\nI0213 12:58:52.227086 86 log.go:172] (0xc0009880b0) Reply frame received for 5\nI0213 12:58:52.349058 86 log.go:172] (0xc0009880b0) Data frame received for 5\nI0213 12:58:52.349178 86 log.go:172] (0xc0007da6e0) (5) Data frame handling\nI0213 12:58:52.349206 86 log.go:172] (0xc0007da6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 12:58:52.380535 86 log.go:172] (0xc0009880b0) Data frame received for 3\nI0213 12:58:52.380559 86 log.go:172] (0xc0007b4000) (3) Data frame handling\nI0213 12:58:52.380577 86 log.go:172] (0xc0007b4000) (3) Data frame sent\nI0213 12:58:52.456009 86 log.go:172] (0xc0009880b0) Data frame received for 1\nI0213 12:58:52.456072 86 log.go:172] (0xc0007da640) (1) Data frame handling\nI0213 12:58:52.456091 86 log.go:172] (0xc0007da640) (1) Data frame sent\nI0213 12:58:52.456685 86 log.go:172] (0xc0009880b0) (0xc0007da640) Stream removed, broadcasting: 1\nI0213 12:58:52.457125 86 log.go:172] (0xc0009880b0) (0xc0007b4000) Stream removed, broadcasting: 3\nI0213 12:58:52.457660 86 log.go:172] (0xc0009880b0) (0xc0007da6e0) Stream removed, broadcasting: 5\nI0213 12:58:52.457736 86 log.go:172] (0xc0009880b0) (0xc0007da640) Stream removed, broadcasting: 1\nI0213 12:58:52.457756 86 log.go:172] (0xc0009880b0) (0xc0007b4000) Stream removed, broadcasting: 3\nI0213 12:58:52.457766 86 log.go:172] (0xc0009880b0) (0xc0007da6e0) Stream removed, broadcasting: 5\n"
Feb 13 12:58:52.470: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 12:58:52.470: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 13 12:59:02.543: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 13 12:59:12.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:59:13.197: INFO: stderr: "I0213 12:59:12.993791 104 log.go:172] (0xc00097e2c0) (0xc000890640) Create stream\nI0213 12:59:12.993944 104 log.go:172] (0xc00097e2c0) (0xc000890640) Stream added, broadcasting: 1\nI0213 12:59:13.006653 104 log.go:172] (0xc00097e2c0) Reply frame received for 1\nI0213 12:59:13.006720 104 log.go:172] (0xc00097e2c0) (0xc000828000) Create stream\nI0213 12:59:13.006729 104 log.go:172] (0xc00097e2c0) (0xc000828000) Stream added, broadcasting: 3\nI0213 12:59:13.008069 104 log.go:172] (0xc00097e2c0) Reply frame received for 3\nI0213 12:59:13.008109 104 log.go:172] (0xc00097e2c0) (0xc00065a280) Create stream\nI0213 12:59:13.008125 104 log.go:172] (0xc00097e2c0) (0xc00065a280) Stream added, broadcasting: 5\nI0213 12:59:13.009908 104 log.go:172] (0xc00097e2c0) Reply frame received for 5\nI0213 12:59:13.094509 104 log.go:172] (0xc00097e2c0) Data frame received for 3\nI0213 12:59:13.094640 104 log.go:172] (0xc000828000) (3) Data frame handling\nI0213 12:59:13.094667 104 log.go:172] (0xc000828000) (3) Data frame sent\nI0213 12:59:13.094681 104 log.go:172] (0xc00097e2c0) Data frame received for 5\nI0213 12:59:13.094692 104 log.go:172] (0xc00065a280) (5) Data frame handling\nI0213 12:59:13.094709 104 log.go:172] (0xc00065a280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 12:59:13.183852 104 log.go:172] (0xc00097e2c0) (0xc00065a280) Stream removed, broadcasting: 5\nI0213 12:59:13.184009 104 log.go:172] (0xc00097e2c0) Data frame received for 1\nI0213 12:59:13.184046 104 log.go:172] (0xc00097e2c0) (0xc000828000) Stream removed, broadcasting: 3\nI0213 12:59:13.184131 104 log.go:172] (0xc000890640) (1) Data frame handling\nI0213 12:59:13.184141 104 log.go:172] (0xc000890640) (1) Data frame sent\nI0213 12:59:13.184146 104 log.go:172] (0xc00097e2c0) (0xc000890640) Stream removed, broadcasting: 1\nI0213 12:59:13.184158 104 log.go:172] (0xc00097e2c0) Go away received\nI0213 12:59:13.185134 104 log.go:172] (0xc00097e2c0) (0xc000890640) Stream removed, broadcasting: 1\nI0213 12:59:13.185144 104 log.go:172] (0xc00097e2c0) (0xc000828000) Stream removed, broadcasting: 3\nI0213 12:59:13.185150 104 log.go:172] (0xc00097e2c0) (0xc00065a280) Stream removed, broadcasting: 5\n"
Feb 13 12:59:13.198: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 13 12:59:13.198: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 13 12:59:23.298: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:59:23.298: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 12:59:23.298: INFO: Waiting for Pod statefulset-6015/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 12:59:33.316: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:59:33.316: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 12:59:33.316: INFO: Waiting for Pod statefulset-6015/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 12:59:43.818: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:59:43.819: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 12:59:53.371: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
Feb 13 12:59:53.371: INFO: Waiting for Pod statefulset-6015/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 13 13:00:03.327: INFO: Waiting for StatefulSet statefulset-6015/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 13 13:00:13.312: INFO: Deleting all statefulset in ns statefulset-6015
Feb 13 13:00:13.318: INFO: Scaling statefulset ss2 to 0
Feb 13 13:00:43.376: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 13:00:43.385: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:00:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6015" for this suite.
Feb 13 13:00:51.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:00:51.638: INFO: namespace statefulset-6015 deletion completed in 8.196415234s

• [SLOW TEST:243.030 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
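The rolling-update and rollback flow in this spec can be reproduced with plain kubectl against a live StatefulSet. A sketch under the assumption that the template's container is named nginx (the log shows the images but not the container name):

# Trigger a rolling update by editing the pod template, as the test does at 12:57:31:
kubectl -n statefulset-6015 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-6015 rollout status statefulset/ss2
# Each template change produces a ControllerRevision; ss2-6c5cd755cd and ss2-7c9b54fd4c above are these:
kubectl -n statefulset-6015 get controllerrevisions
# Roll back to the previous template, the counterpart of the test's second update:
kubectl -n statefulset-6015 rollout undo statefulset/ss2

The repeated mv of index.html in and out of /tmp appears to be how the test perturbs and restores the pods' readiness while it observes update ordering.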
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:00:51.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8c6d35d2-075d-4b49-a6e6-ab7554dec233
STEP: Creating a pod to test consume configMaps
Feb 13 13:00:51.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b" in namespace "projected-1478" to be "success or failure"
Feb 13 13:00:51.996: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.773581ms
Feb 13 13:00:54.010: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065388501s
Feb 13 13:00:56.020: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07524737s
Feb 13 13:00:58.030: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085321197s
Feb 13 13:01:00.044: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Running", Reason="", readiness=true. Elapsed: 8.099314363s
Feb 13 13:01:02.055: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110664432s
STEP: Saw pod success
Feb 13 13:01:02.055: INFO: Pod "pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b" satisfied condition "success or failure"
Feb 13 13:01:02.059: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b container projected-configmap-volume-test:
STEP: delete the pod
Feb 13 13:01:02.349: INFO: Waiting for pod pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b to disappear
Feb 13 13:01:02.355: INFO: Pod pod-projected-configmaps-a83f78ae-6d6a-4a4a-9a2d-5b58e8ce299b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:01:02.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1478" for this suite.
Feb 13 13:01:09.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:01:09.160: INFO: namespace projected-1478 deletion completed in 6.80115254s

• [SLOW TEST:17.521 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
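This spec mounts a ConfigMap through a projected volume and reads it back as a non-root user. A sketch with illustrative names and an assumed UID (the suite pins its own fixed non-root UID):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # assumed non-root UID
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
EOF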
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:01:09.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 13 13:01:09.299: INFO: Waiting up to 5m0s for pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c" in namespace "emptydir-2593" to be "success or failure"
Feb 13 13:01:09.305: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.409286ms
Feb 13 13:01:11.328: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028213411s
Feb 13 13:01:13.336: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036443822s
Feb 13 13:01:15.345: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045791581s
Feb 13 13:01:17.352: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052523046s
STEP: Saw pod success
Feb 13 13:01:17.352: INFO: Pod "pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c" satisfied condition "success or failure"
Feb 13 13:01:17.356: INFO: Trying to get logs from node iruya-node pod pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c container test-container:
STEP: delete the pod
Feb 13 13:01:17.416: INFO: Waiting for pod pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c to disappear
Feb 13 13:01:17.457: INFO: Pod pod-e750b8f5-4a2e-4a29-a883-fedd4cc4e88c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:01:17.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2593" for this suite.
Feb 13 13:01:23.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:01:23.743: INFO: namespace emptydir-2593 deletion completed in 6.281426641s

• [SLOW TEST:14.582 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:01:23.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 13 13:01:32.887: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7774 pod-service-account-5c35e8b9-5abe-4b57-af3e-8028618c5bef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 13 13:01:33.404: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7774 pod-service-account-5c35e8b9-5abe-4b57-af3e-8028618c5bef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 13 13:01:34.164: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7774 pod-service-account-5c35e8b9-5abe-4b57-af3e-8028618c5bef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:01:34.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7774" for this suite.
Feb 13 13:01:40.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:01:40.727: INFO: namespace svcaccounts-7774 deletion completed in 6.155343686s

• [SLOW TEST:16.984 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
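The three kubectl exec invocations above are the whole substance of this spec: every pod (unless opted out) gets the default ServiceAccount credentials projected at a fixed path. The same check by hand, with an illustrative pod name:

kubectl run sa-demo --image=busybox --restart=Never -- sleep 3600
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
kubectl delete pod sa-demo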
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:01:40.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0213 13:02:23.377437 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 13:02:23.377: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:02:23.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6602" for this suite.
Feb 13 13:02:31.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:02:31.556: INFO: namespace gc-6602 deletion completed in 8.162781568s

• [SLOW TEST:50.828 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
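"Delete options say so" refers to the deletion propagation policy. A sketch of orphaning a ReplicationController's pods by hand; my-rc is an illustrative name:

# kubectl of this era (<=1.19) spells orphaning --cascade=false; newer kubectl uses --cascade=orphan:
kubectl delete rc my-rc --cascade=false
# The equivalent raw API call sets propagationPolicy explicitly in DeleteOptions:
kubectl proxy &
curl -X DELETE http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'

The test then waits 30 seconds, as logged above, to confirm the garbage collector leaves the orphaned pods alone.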
SSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:02:31.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:03:32.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1606" for this suite.
Feb 13 13:03:54.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:03:55.043: INFO: namespace container-probe-1606 deletion completed in 22.198496285s

• [SLOW TEST:83.487 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
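The pod in this spec carries a readiness probe that can never pass, and the assertion is that the container is never marked Ready yet is also never restarted (readiness, unlike liveness, does not restart containers). A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo        # illustrative name
spec:
  containers:
  - name: test-webserver
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "exit 1"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-fail-demo -w    # READY stays 0/1, RESTARTS stays 0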
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:03:55.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 13 13:03:55.157: INFO: Waiting up to 5m0s for pod "pod-7080f592-3925-4f47-8158-48018f90e869" in namespace "emptydir-4358" to be "success or failure"
Feb 13 13:03:55.162: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Pending", Reason="", readiness=false. Elapsed: 4.905975ms
Feb 13 13:03:57.168: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011023596s
Feb 13 13:03:59.183: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025630755s
Feb 13 13:04:01.190: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032809717s
Feb 13 13:04:03.225: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067917596s
Feb 13 13:04:05.235: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078345756s
STEP: Saw pod success
Feb 13 13:04:05.236: INFO: Pod "pod-7080f592-3925-4f47-8158-48018f90e869" satisfied condition "success or failure"
Feb 13 13:04:05.239: INFO: Trying to get logs from node iruya-node pod pod-7080f592-3925-4f47-8158-48018f90e869 container test-container:
STEP: delete the pod
Feb 13 13:04:05.524: INFO: Waiting for pod pod-7080f592-3925-4f47-8158-48018f90e869 to disappear
Feb 13 13:04:05.532: INFO: Pod pod-7080f592-3925-4f47-8158-48018f90e869 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:04:05.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4358" for this suite.
Feb 13 13:04:11.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:04:11.745: INFO: namespace emptydir-4358 deletion completed in 6.199693411s

• [SLOW TEST:16.701 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
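The tmpfs variant differs from the default-medium emptyDir specs above only in the volume's medium. A sketch that also shows the tmpfs mount from inside the pod; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # RAM-backed tmpfs instead of node disk
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep /mnt/scratch /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
EOF
kubectl logs emptydir-tmpfs-demo   # expect a tmpfs entry for /mnt/scratch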
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:04:11.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 13 13:04:11.882: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:04:27.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7025" for this suite.
Feb 13 13:04:34.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:04:34.177: INFO: namespace init-container-7025 deletion completed in 6.208925435s

• [SLOW TEST:22.432 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
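Init containers run sequentially to completion before the regular containers start; with restartPolicy Never, a failing init container fails the whole pod. A sketch of the happy path this spec exercises, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo main done"]
EOF
kubectl get pod init-demo -w   # STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Completed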
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:04:34.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5392
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 13:04:34.253: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 13:05:16.526: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5392 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 13:05:16.526: INFO: >>> kubeConfig: /root/.kube/config
I0213 13:05:16.678830 8 log.go:172] (0xc000a99600) (0xc001b965a0) Create stream
I0213 13:05:16.679332 8 log.go:172] (0xc000a99600) (0xc001b965a0) Stream added, broadcasting: 1
I0213 13:05:16.818314 8 log.go:172] (0xc000a99600) Reply frame received for 1
I0213 13:05:16.818638 8 log.go:172] (0xc000a99600) (0xc00201a8c0) Create stream
I0213 13:05:16.818690 8 log.go:172] (0xc000a99600) (0xc00201a8c0) Stream added, broadcasting: 3
I0213 13:05:16.824912 8 log.go:172] (0xc000a99600) Reply frame received for 3
I0213 13:05:16.824976 8 log.go:172] (0xc000a99600) (0xc0017d20a0) Create stream
I0213 13:05:16.824998 8 log.go:172] (0xc000a99600) (0xc0017d20a0) Stream added, broadcasting: 5
I0213 13:05:16.832273 8 log.go:172] (0xc000a99600) Reply frame received for 5
I0213 13:05:17.152318 8 log.go:172] (0xc000a99600) Data frame received for 3
I0213 13:05:17.152361 8 log.go:172] (0xc00201a8c0) (3) Data frame handling
I0213 13:05:17.152376 8 log.go:172] (0xc00201a8c0) (3) Data frame sent
I0213 13:05:17.334147 8 log.go:172] (0xc000a99600) (0xc00201a8c0) Stream removed, broadcasting: 3
I0213 13:05:17.334504 8 log.go:172] (0xc000a99600) (0xc0017d20a0) Stream removed, broadcasting: 5
I0213 13:05:17.334829 8 log.go:172] (0xc000a99600) Data frame received for 1
I0213 13:05:17.334872 8 log.go:172] (0xc001b965a0) (1) Data frame handling
I0213 13:05:17.334907 8 log.go:172] (0xc001b965a0) (1) Data frame sent
I0213 13:05:17.334926 8 log.go:172] (0xc000a99600) (0xc001b965a0) Stream removed, broadcasting: 1
I0213 13:05:17.334951 8 log.go:172] (0xc000a99600) Go away received
I0213 13:05:17.336110 8 log.go:172] (0xc000a99600) (0xc001b965a0) Stream removed, broadcasting: 1
I0213 13:05:17.336144 8 log.go:172] (0xc000a99600) (0xc00201a8c0) Stream removed, broadcasting: 3
I0213 13:05:17.336165 8 log.go:172] (0xc000a99600) (0xc0017d20a0) Stream removed, broadcasting: 5
Feb 13 13:05:17.336: INFO: Waiting for endpoints: map[]
Feb 13 13:05:17.346: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5392 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 13:05:17.346: INFO: >>> kubeConfig: /root/.kube/config
I0213 13:05:17.419851 8 log.go:172] (0xc0009f8420) (0xc0004555e0) Create stream
I0213 13:05:17.419970 8 log.go:172] (0xc0009f8420) (0xc0004555e0) Stream added, broadcasting: 1
I0213 13:05:17.429129 8 log.go:172] (0xc0009f8420) Reply frame received for 1
I0213 13:05:17.429204 8 log.go:172] (0xc0009f8420) (0xc0017d2140) Create stream
I0213 13:05:17.429228 8 log.go:172] (0xc0009f8420) (0xc0017d2140) Stream added, broadcasting: 3
I0213 13:05:17.431826 8 log.go:172] (0xc0009f8420) Reply frame received for 3
I0213 13:05:17.431990 8 log.go:172] (0xc0009f8420) (0xc00201a960) Create stream
I0213 13:05:17.432011 8 log.go:172] (0xc0009f8420) (0xc00201a960) Stream added, broadcasting: 5
I0213 13:05:17.433922 8 log.go:172] (0xc0009f8420) Reply frame received for 5
I0213 13:05:17.546665 8 log.go:172] (0xc0009f8420) Data frame received for 3
I0213 13:05:17.546943 8 log.go:172] (0xc0017d2140) (3) Data frame handling
I0213 13:05:17.547030 8 log.go:172] (0xc0017d2140) (3) Data frame sent
I0213 13:05:17.666496 8 log.go:172] (0xc0009f8420) Data frame received for 1
I0213 13:05:17.666629 8 log.go:172] (0xc0009f8420) (0xc0017d2140) Stream removed, broadcasting: 3
I0213 13:05:17.666713 8 log.go:172] (0xc0004555e0) (1) Data frame handling
I0213 13:05:17.666731 8 log.go:172] (0xc0004555e0) (1) Data frame sent
I0213 13:05:17.666824 8 log.go:172] (0xc0009f8420) (0xc00201a960) Stream removed, broadcasting: 5
I0213 13:05:17.666880 8 log.go:172] (0xc0009f8420) (0xc0004555e0) Stream removed, broadcasting: 1
I0213 13:05:17.666899 8 log.go:172] (0xc0009f8420) Go away received
I0213 13:05:17.667044 8 log.go:172] (0xc0009f8420) (0xc0004555e0) Stream removed, broadcasting: 1
I0213 13:05:17.667066 8 log.go:172] (0xc0009f8420) (0xc0017d2140) Stream removed, broadcasting: 3
I0213 13:05:17.667088 8 log.go:172] (0xc0009f8420) (0xc00201a960) Stream removed, broadcasting: 5
Feb 13 13:05:17.667: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:05:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5392" for this suite.
Feb 13 13:05:39.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:05:39.920: INFO: namespace pod-network-test-5392 deletion completed in 22.244664973s

• [SLOW TEST:65.743 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
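The probe in this spec is an HTTP call to a helper server on one test pod that relays a UDP request to another ("/dial"). The exec command, lifted from the log above; the pod IPs (10.44.0.2, 10.32.0.4) are specific to this run and the namespace only exists while the suite runs:

kubectl -n pod-network-test-5392 exec host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"

The helper returns a JSON list of hostnames that answered; an empty list is what would fail the spec.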
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:05:39.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:06:38.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6293" for this suite.
Feb 13 13:06:44.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:06:44.970: INFO: namespace container-runtime-6293 deletion completed in 6.145208829s

• [SLOW TEST:65.050 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
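The three containers (rpa, rpof, rpn) run the same exiting command under restartPolicy Always, OnFailure, and Never respectively, and the spec asserts on phase, restart count, readiness, and the terminal state. A sketch of inspecting those same fields by hand for one case; names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never             # compare with Always / OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}{.status.containerStatuses[0].restartCount}{"\n"}{.status.containerStatuses[0].state}{"\n"}'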
SSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:06:44.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-e158d37c-44a7-488e-b79c-50426d6ca146
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:06:45.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5685" for this suite.
Feb 13 13:06:51.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:06:51.201: INFO: namespace configmap-5685 deletion completed in 6.09668292s

• [SLOW TEST:6.230 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
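This is a negative test: the API server's validation rejects a ConfigMap whose data map uses an empty string as a key, so the spec passes when creation fails. Reproduced by hand (the exact error wording may vary by version):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": value-1
EOF
# Expected: the request is rejected with a validation error about the invalid (empty) key.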
Feb 13 13:07:24.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:07:24.215: INFO: namespace projected-2692 deletion completed in 22.126857346s • [SLOW TEST:33.014 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:07:24.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-4909 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4909 to expose endpoints map[] Feb 13 13:07:24.375: INFO: Get endpoints failed (7.922688ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 13 13:07:26.363: INFO: successfully validated that service endpoint-test2 in namespace services-4909 exposes endpoints map[] (1.995905896s elapsed) STEP: Creating pod pod1 in namespace services-4909 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4909 to expose endpoints map[pod1:[80]] Feb 13 13:07:30.609: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.22839278s elapsed, will retry) Feb 13 13:07:33.651: INFO: successfully validated that service endpoint-test2 in namespace services-4909 exposes endpoints map[pod1:[80]] (7.270397547s elapsed) STEP: Creating pod pod2 in namespace services-4909 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4909 to expose endpoints map[pod1:[80] pod2:[80]] Feb 13 13:07:38.604: INFO: Unexpected endpoints: found map[3c84c7de-683a-40d6-ac4f-500b48aa4b8b:[80]], expected map[pod1:[80] pod2:[80]] (4.933852626s elapsed, will retry) Feb 13 13:07:41.673: INFO: successfully validated that service endpoint-test2 in namespace services-4909 exposes endpoints map[pod1:[80] pod2:[80]] (8.00326433s elapsed) STEP: Deleting pod pod1 in namespace services-4909 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4909 to expose endpoints map[pod2:[80]] Feb 13 13:07:42.771: INFO: successfully validated that service endpoint-test2 in namespace services-4909 exposes endpoints map[pod2:[80]] (1.087233117s elapsed) STEP: Deleting pod pod2 in namespace services-4909 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4909 to expose endpoints map[] Feb 13 13:07:43.896: INFO: successfully validated that service endpoint-test2 in namespace services-4909 exposes endpoints map[] (1.116980084s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:07:44.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4909" for this suite. Feb 13 13:08:06.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:08:07.061: INFO: namespace services-4909 deletion completed in 22.128246451s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:42.845 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:08:07.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6824 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 13 13:08:07.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 13 13:08:41.357: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6824 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 13 13:08:41.357: INFO: >>> kubeConfig: /root/.kube/config I0213 13:08:41.432616 8 log.go:172] (0xc000a99ad0) (0xc001a67e00) Create stream I0213 13:08:41.433333 8 log.go:172] (0xc000a99ad0) (0xc001a67e00) Stream added, broadcasting: 1 I0213 13:08:41.444736 8 log.go:172] (0xc000a99ad0) Reply frame received for 1 I0213 13:08:41.444856 8 log.go:172] (0xc000a99ad0) (0xc001e39d60) Create stream I0213 13:08:41.444887 8 log.go:172] (0xc000a99ad0) (0xc001e39d60) Stream added, broadcasting: 3 I0213 13:08:41.448078 8 log.go:172] (0xc000a99ad0) Reply frame received for 3 I0213 13:08:41.448116 8 log.go:172] (0xc000a99ad0) (0xc00201b040) Create stream I0213 13:08:41.448133 8 log.go:172] (0xc000a99ad0) (0xc00201b040) Stream added, broadcasting: 5 I0213 13:08:41.452926 8 log.go:172] (0xc000a99ad0) Reply frame received for 5 I0213 13:08:42.645984 8 log.go:172] (0xc000a99ad0) Data frame received for 3 I0213 13:08:42.646148 8 log.go:172] (0xc001e39d60) (3) Data frame handling I0213 13:08:42.646192 8 log.go:172] (0xc001e39d60) (3) Data frame sent I0213 13:08:42.962017 8 log.go:172] (0xc000a99ad0) (0xc001e39d60) Stream removed, broadcasting: 3 I0213 13:08:42.962616 8 log.go:172] (0xc000a99ad0) Data frame received for 1 I0213 13:08:42.962692 8 
log.go:172] (0xc001a67e00) (1) Data frame handling I0213 13:08:42.962760 8 log.go:172] (0xc001a67e00) (1) Data frame sent I0213 13:08:42.962778 8 log.go:172] (0xc000a99ad0) (0xc001a67e00) Stream removed, broadcasting: 1 I0213 13:08:42.962941 8 log.go:172] (0xc000a99ad0) (0xc00201b040) Stream removed, broadcasting: 5 I0213 13:08:42.963583 8 log.go:172] (0xc000a99ad0) (0xc001a67e00) Stream removed, broadcasting: 1 I0213 13:08:42.963647 8 log.go:172] (0xc000a99ad0) (0xc001e39d60) Stream removed, broadcasting: 3 I0213 13:08:42.963771 8 log.go:172] (0xc000a99ad0) (0xc00201b040) Stream removed, broadcasting: 5 I0213 13:08:42.963904 8 log.go:172] (0xc000a99ad0) Go away received Feb 13 13:08:42.963: INFO: Found all expected endpoints: [netserver-0] Feb 13 13:08:42.976: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6824 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 13 13:08:42.976: INFO: >>> kubeConfig: /root/.kube/config I0213 13:08:43.035220 8 log.go:172] (0xc00127cbb0) (0xc00201b4a0) Create stream I0213 13:08:43.035362 8 log.go:172] (0xc00127cbb0) (0xc00201b4a0) Stream added, broadcasting: 1 I0213 13:08:43.044656 8 log.go:172] (0xc00127cbb0) Reply frame received for 1 I0213 13:08:43.044771 8 log.go:172] (0xc00127cbb0) (0xc001a67ea0) Create stream I0213 13:08:43.044787 8 log.go:172] (0xc00127cbb0) (0xc001a67ea0) Stream added, broadcasting: 3 I0213 13:08:43.048253 8 log.go:172] (0xc00127cbb0) Reply frame received for 3 I0213 13:08:43.048339 8 log.go:172] (0xc00127cbb0) (0xc00201b540) Create stream I0213 13:08:43.048356 8 log.go:172] (0xc00127cbb0) (0xc00201b540) Stream added, broadcasting: 5 I0213 13:08:43.052596 8 log.go:172] (0xc00127cbb0) Reply frame received for 5 I0213 13:08:44.166965 8 log.go:172] (0xc00127cbb0) Data frame received for 3 I0213 13:08:44.167176 8 log.go:172] (0xc001a67ea0) (3) Data frame handling I0213 13:08:44.167227 8 log.go:172] (0xc001a67ea0) (3) Data frame sent I0213 13:08:44.277594 8 log.go:172] (0xc00127cbb0) (0xc001a67ea0) Stream removed, broadcasting: 3 I0213 13:08:44.277912 8 log.go:172] (0xc00127cbb0) Data frame received for 1 I0213 13:08:44.277934 8 log.go:172] (0xc00201b4a0) (1) Data frame handling I0213 13:08:44.278055 8 log.go:172] (0xc00201b4a0) (1) Data frame sent I0213 13:08:44.278072 8 log.go:172] (0xc00127cbb0) (0xc00201b4a0) Stream removed, broadcasting: 1 I0213 13:08:44.278316 8 log.go:172] (0xc00127cbb0) (0xc00201b540) Stream removed, broadcasting: 5 I0213 13:08:44.278383 8 log.go:172] (0xc00127cbb0) (0xc00201b4a0) Stream removed, broadcasting: 1 I0213 13:08:44.278396 8 log.go:172] (0xc00127cbb0) (0xc001a67ea0) Stream removed, broadcasting: 3 I0213 13:08:44.278409 8 log.go:172] (0xc00127cbb0) (0xc00201b540) Stream removed, broadcasting: 5 I0213 13:08:44.278735 8 log.go:172] (0xc00127cbb0) Go away received Feb 13 13:08:44.278: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:08:44.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6824" for this suite. 
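The node-pod UDP check above shells into a hostexec pod and runs `echo hostName | nc -w 1 -u <podIP> 8081`; each netserver replies with its hostname, and the test marks the endpoint found when the reply comes back (the "Found all expected endpoints" lines). The same probe written with Go's standard library instead of nc; the address is the pod IP taken from the log and is cluster-specific:

	func probeUDP(addr string) (string, error) {
		// e.g. probeUDP("10.44.0.1:8081")
		conn, err := net.DialTimeout("udp", addr, time.Second)
		if err != nil {
			return "", err
		}
		defer conn.Close()
		// Mirror nc's -w 1: give the whole exchange one second.
		if err := conn.SetDeadline(time.Now().Add(time.Second)); err != nil {
			return "", err
		}
		if _, err := conn.Write([]byte("hostName\n")); err != nil {
			return "", err
		}
		buf := make([]byte, 256)
		n, err := conn.Read(buf)
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(buf[:n])), nil
	}

Requires only net, time, and strings from the standard library.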
Feb 13 13:09:10.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:09:10.430: INFO: namespace pod-network-test-6824 deletion completed in 26.141976624s • [SLOW TEST:63.369 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:09:10.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3811/secret-test-d8da396f-1063-4f44-8399-243bdd18bd0b STEP: Creating a pod to test consume secrets Feb 13 13:09:10.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a" in namespace "secrets-3811" to be "success or failure" Feb 13 13:09:10.532: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.856692ms Feb 13 13:09:12.546: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022779999s Feb 13 13:09:14.559: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036336232s Feb 13 13:09:16.589: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066221457s Feb 13 13:09:18.597: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073708606s Feb 13 13:09:20.604: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08119181s STEP: Saw pod success Feb 13 13:09:20.604: INFO: Pod "pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a" satisfied condition "success or failure" Feb 13 13:09:20.609: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a container env-test: STEP: delete the pod Feb 13 13:09:20.948: INFO: Waiting for pod pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a to disappear Feb 13 13:09:20.956: INFO: Pod pod-configmaps-6d2d0639-a5e7-4142-a324-c4fc4e7e429a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:09:20.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3811" for this suite. 
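The Secrets-via-environment test above creates a secret, injects one of its keys into a container's environment with secretKeyRef, runs `env`, and greps the pod log for the value. A sketch of the pod spec, reusing the imports from the first sketch; the secret name, key, and image are illustrative stand-ins for the generated names in the log:

	func secretEnvPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{{
					Name:    "env-test",
					Image:   "busybox:1.29",
					Command: []string{"sh", "-c", "env"},
					Env: []corev1.EnvVar{{
						// The test asserts this variable appears in the
						// container's environment with the secret's value.
						Name: "SECRET_DATA",
						ValueFrom: &corev1.EnvVarSource{
							SecretKeyRef: &corev1.SecretKeySelector{
								LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
								Key:                  "data-1",
							},
						},
					}},
				}},
			},
		}
	}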
Feb 13 13:09:27.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:09:27.112: INFO: namespace secrets-3811 deletion completed in 6.146229769s • [SLOW TEST:16.682 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:09:27.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 13:09:27.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5668' Feb 13 13:09:31.164: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 13:09:31.164: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Feb 13 13:09:35.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5668' Feb 13 13:09:35.414: INFO: stderr: "" Feb 13 13:09:35.415: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:09:35.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5668" for this suite. 
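The kubectl test above already warns that `kubectl run --generator=deployment/apps.v1` is deprecated. The object it generates is an ordinary apps/v1 Deployment; a sketch of creating the equivalent directly with client-go (add appsv1 "k8s.io/api/apps/v1" to the imports of the first sketch; the "run" selector label is how the generator labeled pods, assumed here rather than read from the log):

	func nginxDeployment() *appsv1.Deployment {
		replicas := int32(1)
		labels := map[string]string{"run": "e2e-test-nginx-deployment"}
		return &appsv1.Deployment{
			ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
			Spec: appsv1.DeploymentSpec{
				Replicas: &replicas,
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{
						Containers: []corev1.Container{{
							Name:  "e2e-test-nginx-deployment",
							Image: "docker.io/library/nginx:1.14-alpine",
						}},
					},
				},
			},
		}
	}

	// Created with:
	// cs.AppsV1().Deployments("kubectl-5668").Create(context.TODO(), nginxDeployment(), metav1.CreateOptions{})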
Feb 13 13:09:43.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:09:43.603: INFO: namespace kubectl-5668 deletion completed in 8.172174581s • [SLOW TEST:16.491 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:09:43.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-3e539a9d-a963-4db1-9b33-474ff040041b STEP: Creating a pod to test consume secrets Feb 13 13:09:43.835: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef" in namespace "projected-2601" to be "success or failure" Feb 13 13:09:43.853: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 17.047634ms Feb 13 13:09:45.874: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038227442s Feb 13 13:09:47.886: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050341471s Feb 13 13:09:49.899: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063736977s Feb 13 13:09:51.910: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074404708s Feb 13 13:09:53.919: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083035828s Feb 13 13:09:55.931: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.095741702s STEP: Saw pod success Feb 13 13:09:55.932: INFO: Pod "pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef" satisfied condition "success or failure" Feb 13 13:09:55.937: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef container projected-secret-volume-test: STEP: delete the pod Feb 13 13:09:56.161: INFO: Waiting for pod pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef to disappear Feb 13 13:09:56.174: INFO: Pod pod-projected-secrets-801481aa-6722-4695-bdbf-7789d17086ef no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:09:56.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2601" for this suite. Feb 13 13:10:04.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:10:04.319: INFO: namespace projected-2601 deletion completed in 8.119805561s • [SLOW TEST:20.715 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:10:04.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-b2c92fb6-526e-40d2-aeb4-32ea16b1c560 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b2c92fb6-526e-40d2-aeb4-32ea16b1c560 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:11:44.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6612" for this suite. 
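The "updates should be reflected in volume" test that finishes above mutates a ConfigMap in place and waits for the kubelet to rewrite the file already mounted in a running pod; propagation happens on the kubelet's sync loop, not instantly, which is why the test spends most of its two minutes in "waiting to observe update in volume". A sketch of the update half, reusing `cs` from the first sketch (the data key and new value are hypothetical; the ConfigMap name is the generated one from the log):

	cm, err := cs.CoreV1().ConfigMaps("configmap-6612").Get(context.TODO(),
		"configmap-test-upd-b2c92fb6-526e-40d2-aeb4-32ea16b1c560", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Mutate the key the pod has mounted; the kubelet will eventually
	// rewrite the projected file with the new value.
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps("configmap-6612").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}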
Feb 13 13:12:08.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:12:08.757: INFO: namespace configmap-6612 deletion completed in 24.229840924s • [SLOW TEST:124.438 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:12:08.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-q7dc STEP: Creating a pod to test atomic-volume-subpath Feb 13 13:12:08.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-q7dc" in namespace "subpath-9101" to be "success or failure" Feb 13 13:12:08.843: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.663898ms Feb 13 13:12:10.866: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02906778s Feb 13 13:12:12.879: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041723507s Feb 13 13:12:14.888: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050218107s Feb 13 13:12:16.896: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 8.05845095s Feb 13 13:12:18.908: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 10.070801647s Feb 13 13:12:20.918: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 12.080323247s Feb 13 13:12:22.928: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 14.090595181s Feb 13 13:12:24.971: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 16.133583773s Feb 13 13:12:27.419: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 18.581748056s Feb 13 13:12:29.429: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 20.591205363s Feb 13 13:12:31.439: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 22.602005966s Feb 13 13:12:33.450: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. Elapsed: 24.612462916s Feb 13 13:12:35.460: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.622910406s Feb 13 13:12:37.469: INFO: Pod "pod-subpath-test-secret-q7dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.631736286s STEP: Saw pod success Feb 13 13:12:37.469: INFO: Pod "pod-subpath-test-secret-q7dc" satisfied condition "success or failure" Feb 13 13:12:37.473: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-q7dc container test-container-subpath-secret-q7dc: STEP: delete the pod Feb 13 13:12:37.545: INFO: Waiting for pod pod-subpath-test-secret-q7dc to disappear Feb 13 13:12:37.561: INFO: Pod pod-subpath-test-secret-q7dc no longer exists STEP: Deleting pod pod-subpath-test-secret-q7dc Feb 13 13:12:37.562: INFO: Deleting pod "pod-subpath-test-secret-q7dc" in namespace "subpath-9101" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:12:37.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9101" for this suite. Feb 13 13:12:43.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:12:43.765: INFO: namespace subpath-9101 deletion completed in 6.161398449s • [SLOW TEST:35.008 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:12:43.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:12:44.008: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"26faf95d-0d81-4965-acf1-3591d9ea9077", Controller:(*bool)(0xc000f4d80a), BlockOwnerDeletion:(*bool)(0xc000f4d80b)}} Feb 13 13:12:44.116: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"221ed878-290c-4e07-8ee8-df9da07d8c2a", Controller:(*bool)(0xc0019172f2), BlockOwnerDeletion:(*bool)(0xc0019172f3)}} Feb 13 13:12:44.136: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e4a4763c-ca44-4df5-8888-8aa229555abf", Controller:(*bool)(0xc001b9c69a), BlockOwnerDeletion:(*bool)(0xc001b9c69b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:12:49.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7" for this suite. 
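The garbage-collector test above wires three pods into an ownership cycle (the dumps show pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and asserts that the GC and namespace teardown are not wedged by it. A sketch of how such owner references are built, reusing the imports from the first sketch; the helper is illustrative and the UIDs must come from the live objects, as in the log:

	func podOwnerRef(owner *corev1.Pod) metav1.OwnerReference {
		return metav1.OwnerReference{
			APIVersion: "v1",
			Kind:       "Pod",
			Name:       owner.Name,
			UID:        owner.UID, // must match the live object's UID
		}
	}

	// pod1.OwnerReferences = [podOwnerRef(pod3)]
	// pod2.OwnerReferences = [podOwnerRef(pod1)]
	// pod3.OwnerReferences = [podOwnerRef(pod2)]
	// Applied via cs.CoreV1().Pods(ns).Update(...); the pass condition is
	// simply that deletion still completes despite the circle.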
Feb 13 13:12:55.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:12:55.497: INFO: namespace gc-7 deletion completed in 6.306927145s • [SLOW TEST:11.732 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:12:55.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 13 13:13:11.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:11.852: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:13.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:13.862: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:15.854: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:15.874: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:17.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:17.869: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:19.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:19.864: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:21.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:21.874: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:23.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:23.865: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:25.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:25.867: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 13:13:27.853: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 13:13:27.865: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:13:27.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3363" for this suite. 
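The lifecycle test above first starts a pod to receive hook traffic, then creates a pod whose container carries a postStart HTTP hook; the kubelet issues the GET as soon as the container starts, and the test verifies the handler pod saw it before deleting everything. A sketch of the hook wiring, with the imports from the first sketch plus "k8s.io/apimachinery/pkg/util/intstr"; path, host, and port are illustrative, and recent client-go names the handler type LifecycleHandler (older releases called it Handler):

	hooked := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "docker.io/library/nginx:1.14-alpine",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart",
					Host: "10.32.0.1", // the handler pod's IP (illustrative)
					Port: intstr.FromInt(8080),
				},
			},
		},
	}

A failing postStart hook kills the container, so the pod reaching Running is itself evidence the hook succeeded.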
Feb 13 13:13:49.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:13:50.165: INFO: namespace container-lifecycle-hook-3363 deletion completed in 22.287599678s • [SLOW TEST:54.667 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:13:50.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e8044a69-d9ec-4794-982d-592cab6c2636 STEP: Creating a pod to test consume configMaps Feb 13 13:13:50.300: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c" in namespace "projected-4827" to be "success or failure" Feb 13 13:13:50.308: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.891435ms Feb 13 13:13:52.316: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01574913s Feb 13 13:13:54.337: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036699277s Feb 13 13:13:56.348: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048308444s Feb 13 13:13:58.357: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056653971s Feb 13 13:14:00.368: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.068264437s STEP: Saw pod success Feb 13 13:14:00.369: INFO: Pod "pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c" satisfied condition "success or failure" Feb 13 13:14:00.375: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c container projected-configmap-volume-test: STEP: delete the pod Feb 13 13:14:00.626: INFO: Waiting for pod pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c to disappear Feb 13 13:14:00.635: INFO: Pod pod-projected-configmaps-fc70d83f-ad39-4d37-b0fb-73b5a611042c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:14:00.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4827" for this suite. Feb 13 13:14:06.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:14:06.786: INFO: namespace projected-4827 deletion completed in 6.144759272s • [SLOW TEST:16.621 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:14:06.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:14:06.908: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 13 13:14:11.925: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 13 13:14:15.937: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 13 13:14:17.987: INFO: Creating deployment "test-rollover-deployment" Feb 13 13:14:18.050: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 13 13:14:20.062: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 13 13:14:20.071: INFO: Ensure that both replica sets have 1 created replica Feb 13 13:14:20.078: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 13 13:14:20.088: INFO: Updating deployment test-rollover-deployment Feb 13 13:14:20.088: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 13 13:14:22.359: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 13 13:14:22.408: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 13 13:14:22.418: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:22.418: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:24.427: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:24.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:27.955: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:27.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:28.433: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:28.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196460, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:30.449: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:30.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196470, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:32.428: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:32.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196470, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:34.441: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:34.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196470, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:36.432: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:36.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196470, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:38.439: INFO: all replica sets need to contain the pod-template-hash label Feb 13 13:14:38.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196470, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196458, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:14:40.434: INFO: Feb 13 13:14:40.434: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 13 13:14:40.478: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9950,SelfLink:/apis/apps/v1/namespaces/deployment-9950/deployments/test-rollover-deployment,UID:b7b9ffff-aec4-4678-855a-384f0c6b1533,ResourceVersion:24195910,Generation:2,CreationTimestamp:2020-02-13 13:14:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-13 13:14:18 +0000 UTC 2020-02-13 13:14:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-13 13:14:40 +0000 UTC 2020-02-13 13:14:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 13 13:14:40.484: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9950,SelfLink:/apis/apps/v1/namespaces/deployment-9950/replicasets/test-rollover-deployment-854595fc44,UID:6f0ded91-e47d-4f47-b0b8-5ad23bf6f8d3,ResourceVersion:24195897,Generation:2,CreationTimestamp:2020-02-13 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b7b9ffff-aec4-4678-855a-384f0c6b1533 0xc0024bfaf7 0xc0024bfaf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 
854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 13 13:14:40.484: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 13 13:14:40.485: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9950,SelfLink:/apis/apps/v1/namespaces/deployment-9950/replicasets/test-rollover-controller,UID:340313e5-2664-420c-b1ee-c0a2c3a1626b,ResourceVersion:24195909,Generation:2,CreationTimestamp:2020-02-13 13:14:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b7b9ffff-aec4-4678-855a-384f0c6b1533 0xc0024bfa27 0xc0024bfa28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:14:40.485: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9950,SelfLink:/apis/apps/v1/namespaces/deployment-9950/replicasets/test-rollover-deployment-9b8b997cf,UID:6c524eb8-73c5-4ab3-b22c-b5dd8391ebe9,ResourceVersion:24195863,Generation:2,CreationTimestamp:2020-02-13 13:14:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b7b9ffff-aec4-4678-855a-384f0c6b1533 0xc0024bfbc0 0xc0024bfbc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:14:40.490: INFO: Pod "test-rollover-deployment-854595fc44-99s24" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-99s24,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9950,SelfLink:/api/v1/namespaces/deployment-9950/pods/test-rollover-deployment-854595fc44-99s24,UID:4c0dded9-dc21-4064-af08-679ed8d3d0e0,ResourceVersion:24195882,Generation:0,CreationTimestamp:2020-02-13 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6f0ded91-e47d-4f47-b0b8-5ad23bf6f8d3 0xc00072e7f7 0xc00072e7f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4mtw7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4mtw7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4mtw7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00072e860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00072e880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:14:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:14:29 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:14:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:14:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-13 13:14:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-13 13:14:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8423ee3f7ee1f965fd38cec95e7f222e6bf8182e094ec0c60ec3666a42c4f70d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:14:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9950" for this suite. Feb 13 13:14:48.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:14:48.620: INFO: namespace deployment-9950 deletion completed in 8.125322412s • [SLOW TEST:41.833 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:14:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 13 13:14:59.698: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:14:59.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4805" for this suite. 
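The adoption and release sequence above comes down to label-selector matching: the ReplicaSet controller adopts an orphan pod whose labels satisfy its selector, and releases a pod (removing its controller ownerReference) as soon as its labels stop matching. A minimal sketch of that matching check using the apimachinery labels package; the label key and values echo the test's pod name but are purely illustrative, not the test's actual manifest:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        // A selector of the kind the ReplicaSet in the test above would carry
        // (illustrative values).
        selector := labels.SelectorFromSet(labels.Set{"name": "pod-adoption-release"})

        orphan := labels.Set{"name": "pod-adoption-release"}        // matches -> adopted
        relabeled := labels.Set{"name": "pod-adoption-release-new"} // no match -> released

        fmt.Println(selector.Matches(orphan))    // true
        fmt.Println(selector.Matches(relabeled)) // false
    }
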
Feb 13 13:15:23.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:15:24.034: INFO: namespace replicaset-4805 deletion completed in 24.205801705s • [SLOW TEST:35.412 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:15:24.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-66jx STEP: Creating a pod to test atomic-volume-subpath Feb 13 13:15:24.185: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-66jx" in namespace "subpath-55" to be "success or failure" Feb 13 13:15:24.215: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Pending", Reason="", readiness=false. Elapsed: 30.251523ms Feb 13 13:15:26.228: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042417812s Feb 13 13:15:28.237: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052232239s Feb 13 13:15:30.247: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061513893s Feb 13 13:15:32.256: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071031846s Feb 13 13:15:34.264: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 10.078488733s Feb 13 13:15:36.273: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 12.088123787s Feb 13 13:15:38.282: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 14.096613781s Feb 13 13:15:40.291: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 16.105594223s Feb 13 13:15:42.296: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 18.111083389s Feb 13 13:15:44.304: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 20.119165281s Feb 13 13:15:46.318: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 22.132636606s Feb 13 13:15:48.333: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.147325489s Feb 13 13:15:50.342: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 26.156579046s Feb 13 13:15:52.357: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 28.172104408s Feb 13 13:15:54.368: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Running", Reason="", readiness=true. Elapsed: 30.182411061s Feb 13 13:15:56.378: INFO: Pod "pod-subpath-test-downwardapi-66jx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.192756098s STEP: Saw pod success Feb 13 13:15:56.378: INFO: Pod "pod-subpath-test-downwardapi-66jx" satisfied condition "success or failure" Feb 13 13:15:56.384: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-66jx container test-container-subpath-downwardapi-66jx: STEP: delete the pod Feb 13 13:15:57.788: INFO: Waiting for pod pod-subpath-test-downwardapi-66jx to disappear Feb 13 13:15:57.798: INFO: Pod pod-subpath-test-downwardapi-66jx no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-66jx Feb 13 13:15:57.798: INFO: Deleting pod "pod-subpath-test-downwardapi-66jx" in namespace "subpath-55" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:15:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-55" for this suite. Feb 13 13:16:03.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:16:04.136: INFO: namespace subpath-55 deletion completed in 6.276828365s • [SLOW TEST:40.102 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:16:04.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 13 13:16:04.201: INFO: Waiting up to 5m0s for pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81" in namespace "emptydir-524" to be "success or failure" Feb 13 13:16:04.214: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81": Phase="Pending", Reason="", readiness=false. Elapsed: 12.829596ms Feb 13 13:16:06.221: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019390649s Feb 13 13:16:08.240: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.038295892s Feb 13 13:16:10.246: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044340822s Feb 13 13:16:12.292: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090486963s STEP: Saw pod success Feb 13 13:16:12.292: INFO: Pod "pod-b0bd4411-843b-4988-9e7a-58350d76be81" satisfied condition "success or failure" Feb 13 13:16:12.297: INFO: Trying to get logs from node iruya-node pod pod-b0bd4411-843b-4988-9e7a-58350d76be81 container test-container: STEP: delete the pod Feb 13 13:16:12.375: INFO: Waiting for pod pod-b0bd4411-843b-4988-9e7a-58350d76be81 to disappear Feb 13 13:16:12.381: INFO: Pod pod-b0bd4411-843b-4988-9e7a-58350d76be81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:16:12.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-524" for this suite. Feb 13 13:16:18.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:16:18.634: INFO: namespace emptydir-524 deletion completed in 6.249210156s • [SLOW TEST:14.497 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:16:18.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 13 13:16:27.346: INFO: Successfully updated pod "pod-update-f9974868-1f09-463b-8bfe-6737b75e3738" STEP: verifying the updated pod is in kubernetes Feb 13 13:16:27.360: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:16:27.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1763" for this suite. 
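The update step above is a read-modify-write against the live object: fetch the pod, mutate it, and send it back with Update. A minimal client-go sketch of that round-trip, assuming a v1.15-era client-go to match this suite (where Get and Update take no context argument) and a hypothetical pod name:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig path the suite itself uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        pods := cs.CoreV1().Pods("default")
        pod, err := pods.Get("pod-update-example", metav1.GetOptions{}) // hypothetical name
        if err != nil {
            panic(err)
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["updated"] = "true" // the mutation; the e2e test tweaks labels similarly
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
        fmt.Println("pod updated")
    }

If the write races with another writer, Update fails with a Conflict error and the Get/mutate/Update cycle has to be retried.
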
Feb 13 13:16:49.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:16:49.616: INFO: namespace pods-1763 deletion completed in 22.228924864s • [SLOW TEST:30.980 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:16:49.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 13 13:16:49.763: INFO: PodSpec: initContainers in spec.initContainers Feb 13 13:18:00.284: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-de8c8e87-a32b-445a-afb3-f697b00e493e", GenerateName:"", Namespace:"init-container-3253", SelfLink:"/api/v1/namespaces/init-container-3253/pods/pod-init-de8c8e87-a32b-445a-afb3-f697b00e493e", UID:"cab4c05f-e066-4cc8-8355-4bb37857b949", ResourceVersion:"24196356", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717196609, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"763663834"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qwtvn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0028ac3c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qwtvn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qwtvn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qwtvn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029764c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019662a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002976550)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002976570)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002976578), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00297657c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196610, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196610, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196610, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717196609, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00207e340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024f02a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024f0310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a1a1a374728d88054bfba2ca800c0839f82fcccafd221a7646e5505169e15a5f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00207e380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00207e360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:18:00.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3253" for this suite. Feb 13 13:18:22.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:18:22.457: INFO: namespace init-container-3253 deletion completed in 22.120650489s • [SLOW TEST:92.841 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:18:22.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 13 13:18:38.742: INFO: File wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-a6e6dc45-97f1-4846-9f9c-8e7d2e8105ab contains '' instead of 'foo.example.com.' Feb 13 13:18:38.831: INFO: File jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-a6e6dc45-97f1-4846-9f9c-8e7d2e8105ab contains '' instead of 'foo.example.com.' 
Feb 13 13:18:38.831: INFO: Lookups using dns-7379/dns-test-a6e6dc45-97f1-4846-9f9c-8e7d2e8105ab failed for: [wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local] Feb 13 13:18:43.857: INFO: DNS probes using dns-test-a6e6dc45-97f1-4846-9f9c-8e7d2e8105ab succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 13 13:19:02.118: INFO: File wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 13 13:19:02.128: INFO: File jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c contains '' instead of 'bar.example.com.' Feb 13 13:19:02.128: INFO: Lookups using dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c failed for: [wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local] Feb 13 13:19:07.155: INFO: File wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 13 13:19:07.165: INFO: File jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 13 13:19:07.165: INFO: Lookups using dns-7379/dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c failed for: [wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local] Feb 13 13:19:12.178: INFO: DNS probes using dns-test-d90967cd-ac6a-432e-8a4b-53170e4d5c9c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7379.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 13 13:19:32.620: INFO: File wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-9f8b8b04-8f54-4e4d-a06e-615b32fa1911 contains '' instead of '10.110.127.51' Feb 13 13:19:32.634: INFO: File jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local from pod dns-7379/dns-test-9f8b8b04-8f54-4e4d-a06e-615b32fa1911 contains '' instead of '10.110.127.51' Feb 13 13:19:32.634: INFO: Lookups using dns-7379/dns-test-9f8b8b04-8f54-4e4d-a06e-615b32fa1911 failed for: [wheezy_udp@dns-test-service-3.dns-7379.svc.cluster.local jessie_udp@dns-test-service-3.dns-7379.svc.cluster.local] Feb 13 13:19:37.660: INFO: DNS probes using dns-test-9f8b8b04-8f54-4e4d-a06e-615b32fa1911 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:19:37.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7379" for this suite. 
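Each phase of the DNS test above reshapes the same Service: an ExternalName service publishes a CNAME record pointing at its externalName, so re-pointing it from foo.example.com to bar.example.com changes the CNAME target, and converting the service to type=ClusterIP replaces the CNAME with an A record for the allocated cluster IP (10.110.127.51 in this run), which is exactly the progression the dig probes observe. A sketch of the initial object as a Go struct, marshalled only for readability (metadata abbreviated; not the test's manifest):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
            Spec: corev1.ServiceSpec{
                Type:         corev1.ServiceTypeExternalName,
                ExternalName: "foo.example.com", // the CNAME target the first probes expect
            },
        }
        out, _ := json.MarshalIndent(svc, "", "  ")
        fmt.Println(string(out))
    }
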
Feb 13 13:19:46.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:19:46.201: INFO: namespace dns-7379 deletion completed in 8.239836583s • [SLOW TEST:83.743 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:19:46.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 13 13:19:46.270: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:19:46.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-511" for this suite. 
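Passing -p 0 to kubectl proxy asks the kernel for any free port rather than a fixed one, so the test parses the actual port from the proxy's startup output instead of reserving a port up front. The same bind-to-port-zero behavior in plain Go:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 tells the OS to allocate any free ephemeral port,
        // which is what `kubectl proxy -p 0` relies on.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        fmt.Println("bound to", ln.Addr()) // e.g. 127.0.0.1:46137
    }
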
Feb 13 13:19:52.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:19:52.586: INFO: namespace kubectl-511 deletion completed in 6.177769035s • [SLOW TEST:6.385 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:19:52.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Feb 13 13:19:52.664: INFO: Waiting up to 5m0s for pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574" in namespace "containers-5643" to be "success or failure" Feb 13 13:19:52.739: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 74.42669ms Feb 13 13:19:54.748: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083395002s Feb 13 13:19:56.755: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090211008s Feb 13 13:19:58.809: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144566862s Feb 13 13:20:00.824: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159050371s Feb 13 13:20:02.833: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168853212s Feb 13 13:20:04.856: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.191624364s STEP: Saw pod success Feb 13 13:20:04.856: INFO: Pod "client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574" satisfied condition "success or failure" Feb 13 13:20:04.864: INFO: Trying to get logs from node iruya-node pod client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574 container test-container: STEP: delete the pod Feb 13 13:20:04.979: INFO: Waiting for pod client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574 to disappear Feb 13 13:20:04.987: INFO: Pod client-containers-e378aac5-3c14-409d-b1e3-fc79d1957574 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:20:04.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5643" for this suite. Feb 13 13:20:11.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:20:11.162: INFO: namespace containers-5643 deletion completed in 6.16616943s • [SLOW TEST:18.574 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:20:11.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Feb 13 13:20:11.290: INFO: Waiting up to 5m0s for pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38" in namespace "var-expansion-6678" to be "success or failure" Feb 13 13:20:11.311: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Pending", Reason="", readiness=false. Elapsed: 20.764881ms Feb 13 13:20:13.317: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026629818s Feb 13 13:20:15.331: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040446819s Feb 13 13:20:17.339: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048561261s Feb 13 13:20:19.346: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056118613s Feb 13 13:20:21.354: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.063677712s STEP: Saw pod success Feb 13 13:20:21.354: INFO: Pod "var-expansion-0a902899-99d9-400c-989b-136f1749cd38" satisfied condition "success or failure" Feb 13 13:20:21.357: INFO: Trying to get logs from node iruya-node pod var-expansion-0a902899-99d9-400c-989b-136f1749cd38 container dapi-container: STEP: delete the pod Feb 13 13:20:21.400: INFO: Waiting for pod var-expansion-0a902899-99d9-400c-989b-136f1749cd38 to disappear Feb 13 13:20:21.448: INFO: Pod var-expansion-0a902899-99d9-400c-989b-136f1749cd38 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:20:21.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6678" for this suite. Feb 13 13:20:27.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:20:27.660: INFO: namespace var-expansion-6678 deletion completed in 6.206817553s • [SLOW TEST:16.498 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:20:27.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 13 13:20:27.768: INFO: Waiting up to 5m0s for pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8" in namespace "containers-3651" to be "success or failure" Feb 13 13:20:27.799: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.583349ms Feb 13 13:20:29.809: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040139669s Feb 13 13:20:32.910: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14099437s Feb 13 13:20:34.931: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.162732328s Feb 13 13:20:36.943: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.174584506s Feb 13 13:20:38.953: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.1843569s STEP: Saw pod success Feb 13 13:20:38.953: INFO: Pod "client-containers-cb709b68-e245-4d24-be36-c55011d49ec8" satisfied condition "success or failure" Feb 13 13:20:38.959: INFO: Trying to get logs from node iruya-node pod client-containers-cb709b68-e245-4d24-be36-c55011d49ec8 container test-container: STEP: delete the pod Feb 13 13:20:39.046: INFO: Waiting for pod client-containers-cb709b68-e245-4d24-be36-c55011d49ec8 to disappear Feb 13 13:20:39.083: INFO: Pod client-containers-cb709b68-e245-4d24-be36-c55011d49ec8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:20:39.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3651" for this suite. Feb 13 13:20:45.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:20:45.316: INFO: namespace containers-3651 deletion completed in 6.227018835s • [SLOW TEST:17.655 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:20:45.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 13 13:20:45.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc" in namespace "downward-api-2473" to be "success or failure" Feb 13 13:20:45.476: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc": Phase="Pending", Reason="", readiness=false. Elapsed: 74.576164ms Feb 13 13:20:47.485: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084029532s Feb 13 13:20:49.497: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095417098s Feb 13 13:20:51.510: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109337605s Feb 13 13:20:53.520: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.118866713s STEP: Saw pod success Feb 13 13:20:53.520: INFO: Pod "downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc" satisfied condition "success or failure" Feb 13 13:20:53.526: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc container client-container: STEP: delete the pod Feb 13 13:20:53.672: INFO: Waiting for pod downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc to disappear Feb 13 13:20:53.686: INFO: Pod downwardapi-volume-d7388cd5-14ed-4214-8524-3ead5b6e98bc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:20:53.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2473" for this suite. Feb 13 13:20:59.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:20:59.904: INFO: namespace downward-api-2473 deletion completed in 6.194669111s • [SLOW TEST:14.587 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:20:59.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 13 13:21:00.149: INFO: Number of nodes with available pods: 0 Feb 13 13:21:00.149: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:01.164: INFO: Number of nodes with available pods: 0 Feb 13 13:21:01.164: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:02.164: INFO: Number of nodes with available pods: 0 Feb 13 13:21:02.164: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:03.170: INFO: Number of nodes with available pods: 0 Feb 13 13:21:03.170: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:04.242: INFO: Number of nodes with available pods: 0 Feb 13 13:21:04.242: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:05.166: INFO: Number of nodes with available pods: 0 Feb 13 13:21:05.166: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:06.163: INFO: Number of nodes with available pods: 0 Feb 13 13:21:06.164: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:07.588: INFO: Number of nodes with available pods: 0 Feb 13 13:21:07.588: INFO: Node iruya-node is running more than one daemon pod Feb 13 13:21:08.167: INFO: Number of nodes with available pods: 1 Feb 13 13:21:08.167: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 13 13:21:09.232: INFO: Number of nodes with available pods: 1 Feb 13 13:21:09.232: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 13 13:21:10.165: INFO: Number of nodes with available pods: 1 Feb 13 13:21:10.165: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 13 13:21:11.166: INFO: Number of nodes with available pods: 2 Feb 13 13:21:11.166: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 13 13:21:11.256: INFO: Number of nodes with available pods: 2 Feb 13 13:21:11.256: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3823, will wait for the garbage collector to delete the pods Feb 13 13:21:12.451: INFO: Deleting DaemonSet.extensions daemon-set took: 109.740722ms Feb 13 13:21:13.252: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.958589ms Feb 13 13:21:19.863: INFO: Number of nodes with available pods: 0 Feb 13 13:21:19.863: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 13:21:19.872: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3823/daemonsets","resourceVersion":"24196910"},"items":null} Feb 13 13:21:19.876: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3823/pods","resourceVersion":"24196910"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:21:19.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3823" for this suite. 
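"Retry creating failed daemon pods" in the test above means that once a daemon pod is forced into the Failed phase, the DaemonSet controller deletes it and schedules a replacement, which is why the available-pod count recovers to 2 before teardown. A sketch of a DaemonSet of the same general shape; the name, label, and image are illustrative stand-ins, not the test's manifest:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        podLabels := map[string]string{"daemonset-name": "daemon-set"} // illustrative
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                // Every schedulable node (modulo taints) gets one pod from this template.
                Selector: &metav1.LabelSelector{MatchLabels: podLabels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine", // image borrowed from the rollover test above
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }
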
Feb 13 13:21:25.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:21:26.092: INFO: namespace daemonsets-3823 deletion completed in 6.186586302s • [SLOW TEST:26.188 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:21:26.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:21:26.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6708" for this suite. 
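The Kubelet test above runs a pod whose only container executes a command that exits non-zero on every start, then verifies the crash-looping pod can still be deleted cleanly. A sketch of such a pod; the pod and container names are hypothetical, while busybox and /bin/false match what nearby tests in this run use:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "always-fails"}, // hypothetical name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways, // the kubelet keeps restarting it with backoff
                Containers: []corev1.Container{{
                    Name:    "bin-false",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/false"}, // exits 1 immediately, every time
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
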
Feb 13 13:21:32.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:21:32.514: INFO: namespace kubelet-test-6708 deletion completed in 6.201031566s • [SLOW TEST:6.422 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:21:32.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Feb 13 13:21:32.661: INFO: Waiting up to 5m0s for pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2" in namespace "emptydir-8976" to be "success or failure" Feb 13 13:21:32.666: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749289ms Feb 13 13:21:34.671: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010678496s Feb 13 13:21:36.681: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020609849s Feb 13 13:21:38.696: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035581681s Feb 13 13:21:40.707: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046171196s STEP: Saw pod success Feb 13 13:21:40.707: INFO: Pod "pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2" satisfied condition "success or failure" Feb 13 13:21:40.725: INFO: Trying to get logs from node iruya-node pod pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2 container test-container: STEP: delete the pod Feb 13 13:21:40.954: INFO: Waiting for pod pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2 to disappear Feb 13 13:21:40.978: INFO: Pod pod-74542ee6-9b9d-4367-87e8-6ebf72e03ad2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:21:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8976" for this suite. 
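The emptydir specs in this run share one template: create a pod whose container inspects the mounted volume and writes the result to its log, then wait for the "success or failure" condition seen above. A minimal sketch of such a pod, with a stat-based busybox command standing in for the suite's mounttest image (names and command are illustrative); the HostPath spec further down follows the same pattern with a corev1.HostPathVolumeSource in place of the EmptyDir:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirModePod mounts an emptyDir on the default medium (an empty
// EmptyDirVolumeSource means node disk, not tmpfs) and prints the mount's
// file mode, which the test then reads back from the container log.
func emptyDirModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirModePod().Name) }
```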
Feb 13 13:21:47.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:21:47.148: INFO: namespace emptydir-8976 deletion completed in 6.163133771s • [SLOW TEST:14.633 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:21:47.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-5628 I0213 13:21:47.254522 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5628, replica count: 1 I0213 13:21:48.305820 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:49.306284 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:50.306922 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:51.307493 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:52.308414 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:53.309051 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:54.309471 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 13:21:55.309932 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 13 13:21:55.477: INFO: Created: latency-svc-rd459 Feb 13 13:21:55.564: INFO: Got endpoints: latency-svc-rd459 [153.713891ms] Feb 13 13:21:55.640: INFO: Created: latency-svc-q552h Feb 13 13:21:55.661: INFO: Got endpoints: latency-svc-q552h [96.666097ms] Feb 13 13:21:55.755: INFO: Created: latency-svc-7wz5x Feb 13 13:21:55.772: INFO: Got endpoints: latency-svc-7wz5x [205.521416ms] Feb 13 13:21:55.828: INFO: Created: latency-svc-nvzfw Feb 13 13:21:55.828: INFO: Got endpoints: latency-svc-nvzfw [262.079303ms] Feb 13 13:21:55.980: INFO: Created: latency-svc-s92z4 Feb 13 13:21:55.986: INFO: Got 
endpoints: latency-svc-s92z4 [421.177608ms] Feb 13 13:21:56.060: INFO: Created: latency-svc-bvfnl Feb 13 13:21:56.157: INFO: Got endpoints: latency-svc-bvfnl [592.530046ms] Feb 13 13:21:56.204: INFO: Created: latency-svc-b7fwm Feb 13 13:21:56.212: INFO: Got endpoints: latency-svc-b7fwm [646.611569ms] Feb 13 13:21:56.251: INFO: Created: latency-svc-4hjxv Feb 13 13:21:56.339: INFO: Got endpoints: latency-svc-4hjxv [772.826411ms] Feb 13 13:21:56.357: INFO: Created: latency-svc-vf82d Feb 13 13:21:56.371: INFO: Got endpoints: latency-svc-vf82d [804.787926ms] Feb 13 13:21:56.405: INFO: Created: latency-svc-cvzmd Feb 13 13:21:56.417: INFO: Got endpoints: latency-svc-cvzmd [850.919502ms] Feb 13 13:21:56.557: INFO: Created: latency-svc-zlnnc Feb 13 13:21:56.573: INFO: Got endpoints: latency-svc-zlnnc [1.006709708s] Feb 13 13:21:56.700: INFO: Created: latency-svc-f76hn Feb 13 13:21:56.724: INFO: Got endpoints: latency-svc-f76hn [1.158592793s] Feb 13 13:21:56.761: INFO: Created: latency-svc-bhcxf Feb 13 13:21:56.862: INFO: Got endpoints: latency-svc-bhcxf [1.295741967s] Feb 13 13:21:56.865: INFO: Created: latency-svc-qxblg Feb 13 13:21:56.887: INFO: Got endpoints: latency-svc-qxblg [1.321278521s] Feb 13 13:21:56.922: INFO: Created: latency-svc-9zxmm Feb 13 13:21:56.930: INFO: Got endpoints: latency-svc-9zxmm [1.364041412s] Feb 13 13:21:57.065: INFO: Created: latency-svc-l45pj Feb 13 13:21:57.072: INFO: Got endpoints: latency-svc-l45pj [1.506550363s] Feb 13 13:21:57.157: INFO: Created: latency-svc-hn9xl Feb 13 13:21:57.230: INFO: Got endpoints: latency-svc-hn9xl [1.568465562s] Feb 13 13:21:57.239: INFO: Created: latency-svc-grcqb Feb 13 13:21:57.241: INFO: Got endpoints: latency-svc-grcqb [1.468713626s] Feb 13 13:21:57.293: INFO: Created: latency-svc-59d4r Feb 13 13:21:57.297: INFO: Got endpoints: latency-svc-59d4r [1.469502158s] Feb 13 13:21:57.418: INFO: Created: latency-svc-ll9kj Feb 13 13:21:57.428: INFO: Got endpoints: latency-svc-ll9kj [1.442030838s] Feb 13 13:21:57.469: INFO: Created: latency-svc-pjzmp Feb 13 13:21:57.476: INFO: Got endpoints: latency-svc-pjzmp [1.318001671s] Feb 13 13:21:57.607: INFO: Created: latency-svc-rqf84 Feb 13 13:21:57.616: INFO: Got endpoints: latency-svc-rqf84 [1.403297706s] Feb 13 13:21:57.695: INFO: Created: latency-svc-pt7t5 Feb 13 13:21:57.804: INFO: Got endpoints: latency-svc-pt7t5 [1.464455569s] Feb 13 13:21:57.833: INFO: Created: latency-svc-nkj9j Feb 13 13:21:57.842: INFO: Got endpoints: latency-svc-nkj9j [1.471281704s] Feb 13 13:21:58.081: INFO: Created: latency-svc-w6vmc Feb 13 13:21:58.121: INFO: Got endpoints: latency-svc-w6vmc [1.704331437s] Feb 13 13:21:58.174: INFO: Created: latency-svc-zgzxs Feb 13 13:21:58.301: INFO: Got endpoints: latency-svc-zgzxs [1.728487477s] Feb 13 13:21:58.400: INFO: Created: latency-svc-mgwpt Feb 13 13:21:58.516: INFO: Got endpoints: latency-svc-mgwpt [1.79161771s] Feb 13 13:21:58.561: INFO: Created: latency-svc-nmtl9 Feb 13 13:21:58.579: INFO: Got endpoints: latency-svc-nmtl9 [1.716940448s] Feb 13 13:21:58.792: INFO: Created: latency-svc-dwmdt Feb 13 13:21:58.828: INFO: Got endpoints: latency-svc-dwmdt [1.940499836s] Feb 13 13:21:58.894: INFO: Created: latency-svc-cjkrb Feb 13 13:21:59.007: INFO: Got endpoints: latency-svc-cjkrb [2.07615553s] Feb 13 13:21:59.039: INFO: Created: latency-svc-9br6x Feb 13 13:21:59.061: INFO: Got endpoints: latency-svc-9br6x [1.98965417s] Feb 13 13:21:59.106: INFO: Created: latency-svc-5fj8w Feb 13 13:21:59.194: INFO: Got endpoints: latency-svc-5fj8w [1.963538133s] Feb 13 13:21:59.233: INFO: 
Created: latency-svc-kxrcf Feb 13 13:21:59.241: INFO: Got endpoints: latency-svc-kxrcf [2.000387579s] Feb 13 13:21:59.283: INFO: Created: latency-svc-9bsg6 Feb 13 13:21:59.349: INFO: Got endpoints: latency-svc-9bsg6 [2.051789339s] Feb 13 13:21:59.383: INFO: Created: latency-svc-s4r6k Feb 13 13:21:59.388: INFO: Got endpoints: latency-svc-s4r6k [1.959853942s] Feb 13 13:21:59.420: INFO: Created: latency-svc-9pwrg Feb 13 13:21:59.443: INFO: Got endpoints: latency-svc-9pwrg [1.967286041s] Feb 13 13:21:59.539: INFO: Created: latency-svc-b75j7 Feb 13 13:21:59.558: INFO: Got endpoints: latency-svc-b75j7 [1.941622724s] Feb 13 13:21:59.590: INFO: Created: latency-svc-kcgkb Feb 13 13:21:59.601: INFO: Got endpoints: latency-svc-kcgkb [1.79729377s] Feb 13 13:21:59.714: INFO: Created: latency-svc-j8ng2 Feb 13 13:21:59.719: INFO: Got endpoints: latency-svc-j8ng2 [1.876903734s] Feb 13 13:21:59.764: INFO: Created: latency-svc-l797b Feb 13 13:22:00.026: INFO: Got endpoints: latency-svc-l797b [1.904386645s] Feb 13 13:22:00.056: INFO: Created: latency-svc-pbd42 Feb 13 13:22:00.094: INFO: Got endpoints: latency-svc-pbd42 [1.792083362s] Feb 13 13:22:00.237: INFO: Created: latency-svc-mqwg8 Feb 13 13:22:00.301: INFO: Got endpoints: latency-svc-mqwg8 [1.784332627s] Feb 13 13:22:00.395: INFO: Created: latency-svc-49vw9 Feb 13 13:22:00.399: INFO: Got endpoints: latency-svc-49vw9 [1.81963392s] Feb 13 13:22:00.464: INFO: Created: latency-svc-4x7kn Feb 13 13:22:00.464: INFO: Got endpoints: latency-svc-4x7kn [1.635914821s] Feb 13 13:22:00.573: INFO: Created: latency-svc-pdkct Feb 13 13:22:00.589: INFO: Got endpoints: latency-svc-pdkct [1.581723625s] Feb 13 13:22:00.650: INFO: Created: latency-svc-4trxf Feb 13 13:22:00.650: INFO: Got endpoints: latency-svc-4trxf [185.785789ms] Feb 13 13:22:00.748: INFO: Created: latency-svc-2q4ll Feb 13 13:22:00.757: INFO: Got endpoints: latency-svc-2q4ll [1.69520718s] Feb 13 13:22:00.846: INFO: Created: latency-svc-8hplt Feb 13 13:22:00.978: INFO: Got endpoints: latency-svc-8hplt [1.784162459s] Feb 13 13:22:01.012: INFO: Created: latency-svc-z97mk Feb 13 13:22:01.018: INFO: Got endpoints: latency-svc-z97mk [1.776181765s] Feb 13 13:22:01.066: INFO: Created: latency-svc-rjngm Feb 13 13:22:01.072: INFO: Got endpoints: latency-svc-rjngm [1.722221454s] Feb 13 13:22:01.177: INFO: Created: latency-svc-4sgwc Feb 13 13:22:01.207: INFO: Got endpoints: latency-svc-4sgwc [1.817852713s] Feb 13 13:22:01.213: INFO: Created: latency-svc-m6lg6 Feb 13 13:22:01.222: INFO: Got endpoints: latency-svc-m6lg6 [1.779109155s] Feb 13 13:22:01.359: INFO: Created: latency-svc-xddt2 Feb 13 13:22:01.387: INFO: Got endpoints: latency-svc-xddt2 [1.829227474s] Feb 13 13:22:01.501: INFO: Created: latency-svc-7g5m8 Feb 13 13:22:01.518: INFO: Got endpoints: latency-svc-7g5m8 [1.916678679s] Feb 13 13:22:01.593: INFO: Created: latency-svc-jr268 Feb 13 13:22:01.690: INFO: Got endpoints: latency-svc-jr268 [1.970546404s] Feb 13 13:22:01.744: INFO: Created: latency-svc-6lvlt Feb 13 13:22:01.847: INFO: Got endpoints: latency-svc-6lvlt [1.821242899s] Feb 13 13:22:01.853: INFO: Created: latency-svc-tbjmp Feb 13 13:22:01.863: INFO: Got endpoints: latency-svc-tbjmp [1.769470592s] Feb 13 13:22:02.074: INFO: Created: latency-svc-7nd24 Feb 13 13:22:02.125: INFO: Got endpoints: latency-svc-7nd24 [1.823237349s] Feb 13 13:22:02.130: INFO: Created: latency-svc-s2lmn Feb 13 13:22:02.151: INFO: Got endpoints: latency-svc-s2lmn [1.751474049s] Feb 13 13:22:02.251: INFO: Created: latency-svc-gs28c Feb 13 13:22:02.259: INFO: Got endpoints: 
latency-svc-gs28c [1.670793435s] Feb 13 13:22:02.393: INFO: Created: latency-svc-wrbh5 Feb 13 13:22:02.441: INFO: Got endpoints: latency-svc-wrbh5 [1.790795271s] Feb 13 13:22:02.452: INFO: Created: latency-svc-m62xq Feb 13 13:22:02.459: INFO: Got endpoints: latency-svc-m62xq [1.702084673s] Feb 13 13:22:02.556: INFO: Created: latency-svc-95g52 Feb 13 13:22:02.571: INFO: Got endpoints: latency-svc-95g52 [1.592915716s] Feb 13 13:22:02.638: INFO: Created: latency-svc-ctgq5 Feb 13 13:22:02.698: INFO: Got endpoints: latency-svc-ctgq5 [1.680782605s] Feb 13 13:22:02.741: INFO: Created: latency-svc-t7brt Feb 13 13:22:02.755: INFO: Got endpoints: latency-svc-t7brt [1.682458925s] Feb 13 13:22:02.793: INFO: Created: latency-svc-572zb Feb 13 13:22:02.875: INFO: Got endpoints: latency-svc-572zb [1.668490531s] Feb 13 13:22:02.945: INFO: Created: latency-svc-lw65l Feb 13 13:22:03.022: INFO: Got endpoints: latency-svc-lw65l [1.799990854s] Feb 13 13:22:03.025: INFO: Created: latency-svc-wlcbg Feb 13 13:22:03.040: INFO: Got endpoints: latency-svc-wlcbg [1.652277245s] Feb 13 13:22:03.113: INFO: Created: latency-svc-pv2jh Feb 13 13:22:03.175: INFO: Got endpoints: latency-svc-pv2jh [1.656874406s] Feb 13 13:22:03.221: INFO: Created: latency-svc-sqlpk Feb 13 13:22:03.233: INFO: Got endpoints: latency-svc-sqlpk [1.542223386s] Feb 13 13:22:03.332: INFO: Created: latency-svc-9rzmq Feb 13 13:22:03.346: INFO: Got endpoints: latency-svc-9rzmq [1.49776139s] Feb 13 13:22:03.392: INFO: Created: latency-svc-pfndh Feb 13 13:22:03.397: INFO: Got endpoints: latency-svc-pfndh [1.533264266s] Feb 13 13:22:03.518: INFO: Created: latency-svc-wkj5h Feb 13 13:22:03.521: INFO: Got endpoints: latency-svc-wkj5h [1.395820107s] Feb 13 13:22:03.595: INFO: Created: latency-svc-qj7lx Feb 13 13:22:03.610: INFO: Got endpoints: latency-svc-qj7lx [1.458302083s] Feb 13 13:22:03.793: INFO: Created: latency-svc-pf2s7 Feb 13 13:22:03.804: INFO: Got endpoints: latency-svc-pf2s7 [1.543177703s] Feb 13 13:22:03.954: INFO: Created: latency-svc-w88cr Feb 13 13:22:04.046: INFO: Created: latency-svc-h5kmd Feb 13 13:22:04.046: INFO: Got endpoints: latency-svc-w88cr [1.604509192s] Feb 13 13:22:04.141: INFO: Got endpoints: latency-svc-h5kmd [1.681364803s] Feb 13 13:22:04.211: INFO: Created: latency-svc-gzgcm Feb 13 13:22:04.213: INFO: Got endpoints: latency-svc-gzgcm [1.640983927s] Feb 13 13:22:04.314: INFO: Created: latency-svc-wbmtc Feb 13 13:22:04.335: INFO: Got endpoints: latency-svc-wbmtc [1.63579696s] Feb 13 13:22:04.408: INFO: Created: latency-svc-jql5h Feb 13 13:22:04.458: INFO: Got endpoints: latency-svc-jql5h [1.703101154s] Feb 13 13:22:04.509: INFO: Created: latency-svc-fds6z Feb 13 13:22:04.523: INFO: Got endpoints: latency-svc-fds6z [1.647400255s] Feb 13 13:22:04.636: INFO: Created: latency-svc-ndmdn Feb 13 13:22:04.636: INFO: Got endpoints: latency-svc-ndmdn [1.613453049s] Feb 13 13:22:04.696: INFO: Created: latency-svc-cbls4 Feb 13 13:22:04.711: INFO: Got endpoints: latency-svc-cbls4 [1.671139575s] Feb 13 13:22:04.779: INFO: Created: latency-svc-xxvc5 Feb 13 13:22:04.785: INFO: Got endpoints: latency-svc-xxvc5 [1.609433533s] Feb 13 13:22:04.870: INFO: Created: latency-svc-n7tr4 Feb 13 13:22:04.993: INFO: Got endpoints: latency-svc-n7tr4 [1.759777713s] Feb 13 13:22:05.032: INFO: Created: latency-svc-62rgt Feb 13 13:22:05.036: INFO: Got endpoints: latency-svc-62rgt [1.690511706s] Feb 13 13:22:05.143: INFO: Created: latency-svc-qpvft Feb 13 13:22:05.159: INFO: Got endpoints: latency-svc-qpvft [1.762353972s] Feb 13 13:22:05.234: INFO: Created: 
latency-svc-nzfhd Feb 13 13:22:05.235: INFO: Got endpoints: latency-svc-nzfhd [1.71421651s] Feb 13 13:22:05.344: INFO: Created: latency-svc-vkw7g Feb 13 13:22:05.359: INFO: Got endpoints: latency-svc-vkw7g [1.749016338s] Feb 13 13:22:05.652: INFO: Created: latency-svc-dzdmk Feb 13 13:22:05.676: INFO: Got endpoints: latency-svc-dzdmk [1.872075088s] Feb 13 13:22:05.783: INFO: Created: latency-svc-lw4sg Feb 13 13:22:05.809: INFO: Got endpoints: latency-svc-lw4sg [1.763345423s] Feb 13 13:22:05.892: INFO: Created: latency-svc-cn6l5 Feb 13 13:22:05.977: INFO: Got endpoints: latency-svc-cn6l5 [1.835361672s] Feb 13 13:22:06.059: INFO: Created: latency-svc-f6zvx Feb 13 13:22:06.194: INFO: Got endpoints: latency-svc-f6zvx [1.981274093s] Feb 13 13:22:06.257: INFO: Created: latency-svc-pzkz6 Feb 13 13:22:06.261: INFO: Got endpoints: latency-svc-pzkz6 [1.925847438s] Feb 13 13:22:06.373: INFO: Created: latency-svc-kx98g Feb 13 13:22:06.394: INFO: Got endpoints: latency-svc-kx98g [1.935198456s] Feb 13 13:22:06.439: INFO: Created: latency-svc-czc2w Feb 13 13:22:06.563: INFO: Got endpoints: latency-svc-czc2w [2.039631699s] Feb 13 13:22:06.592: INFO: Created: latency-svc-vfb2z Feb 13 13:22:06.618: INFO: Got endpoints: latency-svc-vfb2z [1.981736242s] Feb 13 13:22:06.745: INFO: Created: latency-svc-d4fmb Feb 13 13:22:06.797: INFO: Got endpoints: latency-svc-d4fmb [2.085576888s] Feb 13 13:22:06.803: INFO: Created: latency-svc-22xh4 Feb 13 13:22:06.807: INFO: Got endpoints: latency-svc-22xh4 [2.021480992s] Feb 13 13:22:06.933: INFO: Created: latency-svc-zqtbg Feb 13 13:22:06.942: INFO: Got endpoints: latency-svc-zqtbg [1.949047416s] Feb 13 13:22:06.994: INFO: Created: latency-svc-mlnzk Feb 13 13:22:06.995: INFO: Got endpoints: latency-svc-mlnzk [1.95827347s] Feb 13 13:22:07.108: INFO: Created: latency-svc-w2cxx Feb 13 13:22:07.113: INFO: Got endpoints: latency-svc-w2cxx [1.953564438s] Feb 13 13:22:07.171: INFO: Created: latency-svc-w9pwj Feb 13 13:22:07.180: INFO: Got endpoints: latency-svc-w9pwj [1.944756228s] Feb 13 13:22:07.281: INFO: Created: latency-svc-vcbsc Feb 13 13:22:07.295: INFO: Got endpoints: latency-svc-vcbsc [1.935059379s] Feb 13 13:22:07.373: INFO: Created: latency-svc-9b8t6 Feb 13 13:22:07.460: INFO: Got endpoints: latency-svc-9b8t6 [1.784021873s] Feb 13 13:22:07.492: INFO: Created: latency-svc-s76dq Feb 13 13:22:07.497: INFO: Got endpoints: latency-svc-s76dq [1.687227828s] Feb 13 13:22:07.547: INFO: Created: latency-svc-t8vjz Feb 13 13:22:07.550: INFO: Got endpoints: latency-svc-t8vjz [1.572678972s] Feb 13 13:22:07.674: INFO: Created: latency-svc-wbpvx Feb 13 13:22:07.684: INFO: Got endpoints: latency-svc-wbpvx [1.489388609s] Feb 13 13:22:07.750: INFO: Created: latency-svc-mbk7b Feb 13 13:22:07.757: INFO: Got endpoints: latency-svc-mbk7b [1.496272277s] Feb 13 13:22:07.905: INFO: Created: latency-svc-5w4qq Feb 13 13:22:07.913: INFO: Got endpoints: latency-svc-5w4qq [1.519598153s] Feb 13 13:22:08.003: INFO: Created: latency-svc-n8r4d Feb 13 13:22:08.063: INFO: Got endpoints: latency-svc-n8r4d [1.499990211s] Feb 13 13:22:08.104: INFO: Created: latency-svc-h5lqm Feb 13 13:22:08.117: INFO: Got endpoints: latency-svc-h5lqm [1.497965979s] Feb 13 13:22:08.311: INFO: Created: latency-svc-5r9hv Feb 13 13:22:08.325: INFO: Got endpoints: latency-svc-5r9hv [1.527925209s] Feb 13 13:22:08.419: INFO: Created: latency-svc-nvwrv Feb 13 13:22:08.595: INFO: Got endpoints: latency-svc-nvwrv [1.787913605s] Feb 13 13:22:08.606: INFO: Created: latency-svc-9kgvd Feb 13 13:22:08.611: INFO: Got endpoints: 
latency-svc-9kgvd [1.669363381s] Feb 13 13:22:08.680: INFO: Created: latency-svc-cpdrn Feb 13 13:22:08.828: INFO: Got endpoints: latency-svc-cpdrn [1.833232309s] Feb 13 13:22:08.896: INFO: Created: latency-svc-x6clz Feb 13 13:22:08.907: INFO: Got endpoints: latency-svc-x6clz [1.793769296s] Feb 13 13:22:09.007: INFO: Created: latency-svc-fv4d2 Feb 13 13:22:09.015: INFO: Got endpoints: latency-svc-fv4d2 [1.835120489s] Feb 13 13:22:09.080: INFO: Created: latency-svc-j6zhf Feb 13 13:22:09.096: INFO: Got endpoints: latency-svc-j6zhf [1.800895806s] Feb 13 13:22:09.253: INFO: Created: latency-svc-qfpwd Feb 13 13:22:09.267: INFO: Got endpoints: latency-svc-qfpwd [1.806697819s] Feb 13 13:22:09.306: INFO: Created: latency-svc-f4mjc Feb 13 13:22:09.391: INFO: Got endpoints: latency-svc-f4mjc [1.893771348s] Feb 13 13:22:09.418: INFO: Created: latency-svc-j995g Feb 13 13:22:09.429: INFO: Got endpoints: latency-svc-j995g [1.878969809s] Feb 13 13:22:09.475: INFO: Created: latency-svc-kp992 Feb 13 13:22:09.483: INFO: Got endpoints: latency-svc-kp992 [1.798955184s] Feb 13 13:22:09.636: INFO: Created: latency-svc-4sjwl Feb 13 13:22:09.653: INFO: Got endpoints: latency-svc-4sjwl [1.895872132s] Feb 13 13:22:09.715: INFO: Created: latency-svc-jt4xp Feb 13 13:22:09.841: INFO: Got endpoints: latency-svc-jt4xp [1.927521267s] Feb 13 13:22:09.879: INFO: Created: latency-svc-qk9bq Feb 13 13:22:09.941: INFO: Got endpoints: latency-svc-qk9bq [1.877163993s] Feb 13 13:22:09.943: INFO: Created: latency-svc-pnpf8 Feb 13 13:22:10.032: INFO: Got endpoints: latency-svc-pnpf8 [1.915538856s] Feb 13 13:22:10.068: INFO: Created: latency-svc-hh6nl Feb 13 13:22:10.072: INFO: Got endpoints: latency-svc-hh6nl [1.746677926s] Feb 13 13:22:10.132: INFO: Created: latency-svc-h8d5b Feb 13 13:22:10.273: INFO: Got endpoints: latency-svc-h8d5b [1.677596918s] Feb 13 13:22:10.333: INFO: Created: latency-svc-l6qxz Feb 13 13:22:10.351: INFO: Got endpoints: latency-svc-l6qxz [1.73923723s] Feb 13 13:22:10.491: INFO: Created: latency-svc-wn8nb Feb 13 13:22:10.535: INFO: Got endpoints: latency-svc-wn8nb [1.706996039s] Feb 13 13:22:10.662: INFO: Created: latency-svc-p5kgd Feb 13 13:22:10.687: INFO: Got endpoints: latency-svc-p5kgd [1.779529108s] Feb 13 13:22:10.741: INFO: Created: latency-svc-5qwt7 Feb 13 13:22:10.830: INFO: Got endpoints: latency-svc-5qwt7 [1.814485463s] Feb 13 13:22:10.840: INFO: Created: latency-svc-5l7tc Feb 13 13:22:10.852: INFO: Got endpoints: latency-svc-5l7tc [1.755941114s] Feb 13 13:22:11.012: INFO: Created: latency-svc-jcp4s Feb 13 13:22:11.028: INFO: Got endpoints: latency-svc-jcp4s [1.760693052s] Feb 13 13:22:11.217: INFO: Created: latency-svc-n9wmp Feb 13 13:22:11.224: INFO: Got endpoints: latency-svc-n9wmp [1.83294899s] Feb 13 13:22:11.279: INFO: Created: latency-svc-ndtcp Feb 13 13:22:11.280: INFO: Got endpoints: latency-svc-ndtcp [1.850580314s] Feb 13 13:22:11.403: INFO: Created: latency-svc-7skd7 Feb 13 13:22:11.412: INFO: Got endpoints: latency-svc-7skd7 [1.928964443s] Feb 13 13:22:11.468: INFO: Created: latency-svc-jh2f6 Feb 13 13:22:11.476: INFO: Got endpoints: latency-svc-jh2f6 [1.822592746s] Feb 13 13:22:11.599: INFO: Created: latency-svc-jfvzs Feb 13 13:22:11.601: INFO: Got endpoints: latency-svc-jfvzs [1.75943019s] Feb 13 13:22:11.670: INFO: Created: latency-svc-d44z6 Feb 13 13:22:11.798: INFO: Got endpoints: latency-svc-d44z6 [1.856322982s] Feb 13 13:22:11.832: INFO: Created: latency-svc-9z8f4 Feb 13 13:22:11.837: INFO: Got endpoints: latency-svc-9z8f4 [1.8039253s] Feb 13 13:22:11.986: INFO: Created: 
latency-svc-9dmzq Feb 13 13:22:11.995: INFO: Got endpoints: latency-svc-9dmzq [1.923358987s] Feb 13 13:22:12.055: INFO: Created: latency-svc-pftng Feb 13 13:22:12.065: INFO: Got endpoints: latency-svc-pftng [1.79203995s] Feb 13 13:22:12.243: INFO: Created: latency-svc-7dspt Feb 13 13:22:12.456: INFO: Got endpoints: latency-svc-7dspt [2.105124787s] Feb 13 13:22:12.477: INFO: Created: latency-svc-pw5w9 Feb 13 13:22:12.496: INFO: Got endpoints: latency-svc-pw5w9 [1.959549334s] Feb 13 13:22:12.557: INFO: Created: latency-svc-nxnl9 Feb 13 13:22:12.613: INFO: Got endpoints: latency-svc-nxnl9 [1.926107513s] Feb 13 13:22:12.652: INFO: Created: latency-svc-v42bl Feb 13 13:22:12.667: INFO: Got endpoints: latency-svc-v42bl [1.837118448s] Feb 13 13:22:12.695: INFO: Created: latency-svc-dnglm Feb 13 13:22:12.785: INFO: Got endpoints: latency-svc-dnglm [1.932355484s] Feb 13 13:22:12.790: INFO: Created: latency-svc-8nkx6 Feb 13 13:22:12.795: INFO: Got endpoints: latency-svc-8nkx6 [1.766210811s] Feb 13 13:22:12.865: INFO: Created: latency-svc-bgcq6 Feb 13 13:22:12.965: INFO: Got endpoints: latency-svc-bgcq6 [1.741022735s] Feb 13 13:22:12.966: INFO: Created: latency-svc-vqpvn Feb 13 13:22:12.973: INFO: Got endpoints: latency-svc-vqpvn [1.693226578s] Feb 13 13:22:13.049: INFO: Created: latency-svc-scj7b Feb 13 13:22:13.061: INFO: Got endpoints: latency-svc-scj7b [1.648881508s] Feb 13 13:22:13.157: INFO: Created: latency-svc-dsg2v Feb 13 13:22:13.163: INFO: Got endpoints: latency-svc-dsg2v [1.686811092s] Feb 13 13:22:13.302: INFO: Created: latency-svc-jl9dn Feb 13 13:22:13.305: INFO: Got endpoints: latency-svc-jl9dn [1.704436116s] Feb 13 13:22:13.350: INFO: Created: latency-svc-pfb5t Feb 13 13:22:13.358: INFO: Got endpoints: latency-svc-pfb5t [1.559950253s] Feb 13 13:22:13.473: INFO: Created: latency-svc-b5bh6 Feb 13 13:22:13.479: INFO: Got endpoints: latency-svc-b5bh6 [1.641772134s] Feb 13 13:22:13.529: INFO: Created: latency-svc-qzj5v Feb 13 13:22:13.531: INFO: Got endpoints: latency-svc-qzj5v [1.535510425s] Feb 13 13:22:13.647: INFO: Created: latency-svc-xzwhh Feb 13 13:22:13.648: INFO: Got endpoints: latency-svc-xzwhh [1.582680926s] Feb 13 13:22:13.706: INFO: Created: latency-svc-9zlr4 Feb 13 13:22:13.725: INFO: Got endpoints: latency-svc-9zlr4 [1.268390873s] Feb 13 13:22:13.843: INFO: Created: latency-svc-wskwp Feb 13 13:22:13.873: INFO: Got endpoints: latency-svc-wskwp [1.377282559s] Feb 13 13:22:14.011: INFO: Created: latency-svc-9p4wh Feb 13 13:22:14.064: INFO: Got endpoints: latency-svc-9p4wh [1.45085613s] Feb 13 13:22:14.081: INFO: Created: latency-svc-ncbpv Feb 13 13:22:14.090: INFO: Got endpoints: latency-svc-ncbpv [1.422821777s] Feb 13 13:22:14.156: INFO: Created: latency-svc-226lg Feb 13 13:22:14.238: INFO: Got endpoints: latency-svc-226lg [1.452442644s] Feb 13 13:22:14.240: INFO: Created: latency-svc-hm8k7 Feb 13 13:22:14.243: INFO: Got endpoints: latency-svc-hm8k7 [1.447425698s] Feb 13 13:22:14.371: INFO: Created: latency-svc-vhjgl Feb 13 13:22:14.389: INFO: Got endpoints: latency-svc-vhjgl [1.423627344s] Feb 13 13:22:14.492: INFO: Created: latency-svc-gv9qw Feb 13 13:22:14.497: INFO: Got endpoints: latency-svc-gv9qw [1.523471376s] Feb 13 13:22:14.660: INFO: Created: latency-svc-8gpxt Feb 13 13:22:14.663: INFO: Got endpoints: latency-svc-8gpxt [1.601225045s] Feb 13 13:22:14.743: INFO: Created: latency-svc-4b8n4 Feb 13 13:22:14.791: INFO: Got endpoints: latency-svc-4b8n4 [1.627532076s] Feb 13 13:22:14.850: INFO: Created: latency-svc-ndl6x Feb 13 13:22:14.867: INFO: Got endpoints: 
latency-svc-ndl6x [1.561628189s] Feb 13 13:22:15.116: INFO: Created: latency-svc-m72ss Feb 13 13:22:15.162: INFO: Got endpoints: latency-svc-m72ss [1.804180778s] Feb 13 13:22:15.193: INFO: Created: latency-svc-crdhm Feb 13 13:22:15.197: INFO: Got endpoints: latency-svc-crdhm [1.71795531s] Feb 13 13:22:15.279: INFO: Created: latency-svc-bc5pb Feb 13 13:22:15.285: INFO: Got endpoints: latency-svc-bc5pb [1.753629819s] Feb 13 13:22:15.347: INFO: Created: latency-svc-m6drp Feb 13 13:22:15.351: INFO: Got endpoints: latency-svc-m6drp [1.702661083s] Feb 13 13:22:15.449: INFO: Created: latency-svc-bg8x2 Feb 13 13:22:15.450: INFO: Got endpoints: latency-svc-bg8x2 [1.72413358s] Feb 13 13:22:15.515: INFO: Created: latency-svc-9xnnz Feb 13 13:22:15.520: INFO: Got endpoints: latency-svc-9xnnz [1.646103123s] Feb 13 13:22:15.625: INFO: Created: latency-svc-spz5d Feb 13 13:22:15.639: INFO: Got endpoints: latency-svc-spz5d [1.573766089s] Feb 13 13:22:15.704: INFO: Created: latency-svc-cpghm Feb 13 13:22:15.763: INFO: Got endpoints: latency-svc-cpghm [1.67276765s] Feb 13 13:22:15.813: INFO: Created: latency-svc-9dc62 Feb 13 13:22:15.820: INFO: Got endpoints: latency-svc-9dc62 [1.581832976s] Feb 13 13:22:15.930: INFO: Created: latency-svc-xdrmh Feb 13 13:22:15.945: INFO: Got endpoints: latency-svc-xdrmh [1.702693041s] Feb 13 13:22:15.989: INFO: Created: latency-svc-bg2lb Feb 13 13:22:16.000: INFO: Got endpoints: latency-svc-bg2lb [1.611099128s] Feb 13 13:22:16.079: INFO: Created: latency-svc-7jszf Feb 13 13:22:16.087: INFO: Got endpoints: latency-svc-7jszf [1.590222193s] Feb 13 13:22:16.153: INFO: Created: latency-svc-qm7vc Feb 13 13:22:16.163: INFO: Got endpoints: latency-svc-qm7vc [1.500275133s] Feb 13 13:22:16.309: INFO: Created: latency-svc-9cz8f Feb 13 13:22:16.354: INFO: Got endpoints: latency-svc-9cz8f [1.562295105s] Feb 13 13:22:16.475: INFO: Created: latency-svc-rq2px Feb 13 13:22:16.485: INFO: Got endpoints: latency-svc-rq2px [1.61776124s] Feb 13 13:22:16.534: INFO: Created: latency-svc-l84jn Feb 13 13:22:16.550: INFO: Got endpoints: latency-svc-l84jn [1.387088757s] Feb 13 13:22:16.646: INFO: Created: latency-svc-knkr5 Feb 13 13:22:16.647: INFO: Got endpoints: latency-svc-knkr5 [1.44949464s] Feb 13 13:22:16.695: INFO: Created: latency-svc-mssz6 Feb 13 13:22:16.697: INFO: Got endpoints: latency-svc-mssz6 [1.412184194s] Feb 13 13:22:16.795: INFO: Created: latency-svc-5527g Feb 13 13:22:16.799: INFO: Got endpoints: latency-svc-5527g [1.447563225s] Feb 13 13:22:16.873: INFO: Created: latency-svc-vf9l5 Feb 13 13:22:16.965: INFO: Got endpoints: latency-svc-vf9l5 [1.514560419s] Feb 13 13:22:16.997: INFO: Created: latency-svc-tg7mm Feb 13 13:22:17.005: INFO: Got endpoints: latency-svc-tg7mm [1.484287008s] Feb 13 13:22:17.062: INFO: Created: latency-svc-hmjdx Feb 13 13:22:17.148: INFO: Got endpoints: latency-svc-hmjdx [1.509313435s] Feb 13 13:22:17.189: INFO: Created: latency-svc-5v5kk Feb 13 13:22:17.204: INFO: Got endpoints: latency-svc-5v5kk [1.440802644s] Feb 13 13:22:17.244: INFO: Created: latency-svc-vjb4d Feb 13 13:22:17.312: INFO: Got endpoints: latency-svc-vjb4d [1.491974967s] Feb 13 13:22:17.351: INFO: Created: latency-svc-z88g4 Feb 13 13:22:17.372: INFO: Got endpoints: latency-svc-z88g4 [1.426145515s] Feb 13 13:22:17.479: INFO: Created: latency-svc-wnvcf Feb 13 13:22:17.488: INFO: Got endpoints: latency-svc-wnvcf [1.487634496s] Feb 13 13:22:17.552: INFO: Created: latency-svc-vcxlh Feb 13 13:22:17.556: INFO: Got endpoints: latency-svc-vcxlh [1.468801862s] Feb 13 13:22:17.659: INFO: Created: 
latency-svc-k4bd4 Feb 13 13:22:17.678: INFO: Got endpoints: latency-svc-k4bd4 [1.514017893s] Feb 13 13:22:17.726: INFO: Created: latency-svc-hmjxw Feb 13 13:22:17.794: INFO: Got endpoints: latency-svc-hmjxw [1.440056925s] Feb 13 13:22:17.874: INFO: Created: latency-svc-4lq7j Feb 13 13:22:17.958: INFO: Got endpoints: latency-svc-4lq7j [1.472217771s] Feb 13 13:22:17.990: INFO: Created: latency-svc-t7v6n Feb 13 13:22:17.995: INFO: Got endpoints: latency-svc-t7v6n [1.444859153s] Feb 13 13:22:17.996: INFO: Latencies: [96.666097ms 185.785789ms 205.521416ms 262.079303ms 421.177608ms 592.530046ms 646.611569ms 772.826411ms 804.787926ms 850.919502ms 1.006709708s 1.158592793s 1.268390873s 1.295741967s 1.318001671s 1.321278521s 1.364041412s 1.377282559s 1.387088757s 1.395820107s 1.403297706s 1.412184194s 1.422821777s 1.423627344s 1.426145515s 1.440056925s 1.440802644s 1.442030838s 1.444859153s 1.447425698s 1.447563225s 1.44949464s 1.45085613s 1.452442644s 1.458302083s 1.464455569s 1.468713626s 1.468801862s 1.469502158s 1.471281704s 1.472217771s 1.484287008s 1.487634496s 1.489388609s 1.491974967s 1.496272277s 1.49776139s 1.497965979s 1.499990211s 1.500275133s 1.506550363s 1.509313435s 1.514017893s 1.514560419s 1.519598153s 1.523471376s 1.527925209s 1.533264266s 1.535510425s 1.542223386s 1.543177703s 1.559950253s 1.561628189s 1.562295105s 1.568465562s 1.572678972s 1.573766089s 1.581723625s 1.581832976s 1.582680926s 1.590222193s 1.592915716s 1.601225045s 1.604509192s 1.609433533s 1.611099128s 1.613453049s 1.61776124s 1.627532076s 1.63579696s 1.635914821s 1.640983927s 1.641772134s 1.646103123s 1.647400255s 1.648881508s 1.652277245s 1.656874406s 1.668490531s 1.669363381s 1.670793435s 1.671139575s 1.67276765s 1.677596918s 1.680782605s 1.681364803s 1.682458925s 1.686811092s 1.687227828s 1.690511706s 1.693226578s 1.69520718s 1.702084673s 1.702661083s 1.702693041s 1.703101154s 1.704331437s 1.704436116s 1.706996039s 1.71421651s 1.716940448s 1.71795531s 1.722221454s 1.72413358s 1.728487477s 1.73923723s 1.741022735s 1.746677926s 1.749016338s 1.751474049s 1.753629819s 1.755941114s 1.75943019s 1.759777713s 1.760693052s 1.762353972s 1.763345423s 1.766210811s 1.769470592s 1.776181765s 1.779109155s 1.779529108s 1.784021873s 1.784162459s 1.784332627s 1.787913605s 1.790795271s 1.79161771s 1.79203995s 1.792083362s 1.793769296s 1.79729377s 1.798955184s 1.799990854s 1.800895806s 1.8039253s 1.804180778s 1.806697819s 1.814485463s 1.817852713s 1.81963392s 1.821242899s 1.822592746s 1.823237349s 1.829227474s 1.83294899s 1.833232309s 1.835120489s 1.835361672s 1.837118448s 1.850580314s 1.856322982s 1.872075088s 1.876903734s 1.877163993s 1.878969809s 1.893771348s 1.895872132s 1.904386645s 1.915538856s 1.916678679s 1.923358987s 1.925847438s 1.926107513s 1.927521267s 1.928964443s 1.932355484s 1.935059379s 1.935198456s 1.940499836s 1.941622724s 1.944756228s 1.949047416s 1.953564438s 1.95827347s 1.959549334s 1.959853942s 1.963538133s 1.967286041s 1.970546404s 1.981274093s 1.981736242s 1.98965417s 2.000387579s 2.021480992s 2.039631699s 2.051789339s 2.07615553s 2.085576888s 2.105124787s] Feb 13 13:22:17.996: INFO: 50 %ile: 1.693226578s Feb 13 13:22:17.996: INFO: 90 %ile: 1.941622724s Feb 13 13:22:17.996: INFO: 99 %ile: 2.085576888s Feb 13 13:22:17.996: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:22:17.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svc-latency-5628" for this suite. Feb 13 13:23:16.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:23:16.133: INFO: namespace svc-latency-5628 deletion completed in 58.127277991s • [SLOW TEST:88.985 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:23:16.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Feb 13 13:23:16.201: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1265" to be "success or failure" Feb 13 13:23:16.256: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 55.426021ms Feb 13 13:23:18.266: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065046748s Feb 13 13:23:20.272: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071609912s Feb 13 13:23:22.282: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081208258s Feb 13 13:23:24.296: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095129454s Feb 13 13:23:26.305: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104081342s STEP: Saw pod success Feb 13 13:23:26.305: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 13 13:23:26.309: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 13 13:23:26.374: INFO: Waiting for pod pod-host-path-test to disappear Feb 13 13:23:26.415: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:23:26.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1265" for this suite. 
Feb 13 13:23:32.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:23:32.583: INFO: namespace hostpath-1265 deletion completed in 6.158138328s • [SLOW TEST:16.450 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:23:32.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 13 13:23:40.710: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-520dd87f-301d-4f45-a256-d10fd163987a,GenerateName:,Namespace:events-6140,SelfLink:/api/v1/namespaces/events-6140/pods/send-events-520dd87f-301d-4f45-a256-d10fd163987a,UID:46854fc5-7355-43c5-ab20-6f785deb2711,ResourceVersion:24198431,Generation:0,CreationTimestamp:2020-02-13 13:23:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 671413443,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x5b9r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x5b9r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-x5b9r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019173f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0019174b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:23:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:23:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:23:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-13 13:23:32 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-13 13:23:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://07be067fc09656889f46a59c0ba5b6d3587a37bce37c1597d470741d28b71644}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 13 13:23:42.723: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 13 13:23:44.733: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:23:44.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6140" for this suite. Feb 13 13:24:22.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:24:23.050: INFO: namespace events-6140 deletion completed in 38.27009431s • [SLOW TEST:50.467 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:24:23.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:24:23.174: INFO: Creating deployment "test-recreate-deployment" Feb 13 13:24:23.207: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 13 13:24:23.310: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 13 13:24:25.323: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 13 13:24:25.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:24:27.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:24:29.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:24:31.338: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 13 13:24:31.353: INFO: Updating deployment test-recreate-deployment
Feb 13 13:24:31.353: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 13 13:24:31.732: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8047,SelfLink:/apis/apps/v1/namespaces/deployment-8047/deployments/test-recreate-deployment,UID:ed68a630-0ac6-49eb-9cb1-4bbeb7341550,ResourceVersion:24198556,Generation:2,CreationTimestamp:2020-02-13 13:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-13 13:24:31 +0000 UTC 2020-02-13 13:24:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-13 13:24:31 +0000 UTC 2020-02-13 13:24:23 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 13 13:24:31.839: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8047,SelfLink:/apis/apps/v1/namespaces/deployment-8047/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c8529a1c-df8d-4707-8ad5-27dcab1a0fb2,ResourceVersion:24198555,Generation:1,CreationTimestamp:2020-02-13 13:24:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ed68a630-0ac6-49eb-9cb1-4bbeb7341550 0xc0029da5a7 0xc0029da5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:24:31.839: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 13 13:24:31.840: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8047,SelfLink:/apis/apps/v1/namespaces/deployment-8047/replicasets/test-recreate-deployment-6df85df6b9,UID:2c144b05-9f6b-4bdf-8205-91ad7a9cf3b0,ResourceVersion:24198545,Generation:2,CreationTimestamp:2020-02-13 13:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ed68a630-0ac6-49eb-9cb1-4bbeb7341550 0xc0029da677 0xc0029da678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:24:31.850: INFO: Pod "test-recreate-deployment-5c8c9cc69d-wclpj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-wclpj,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8047,SelfLink:/api/v1/namespaces/deployment-8047/pods/test-recreate-deployment-5c8c9cc69d-wclpj,UID:e1818055-8f26-457d-9279-f1134e90ba3a,ResourceVersion:24198558,Generation:0,CreationTimestamp:2020-02-13 13:24:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c8529a1c-df8d-4707-8ad5-27dcab1a0fb2 0xc0029daf67 0xc0029daf68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xhtr5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhtr5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xhtr5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029dafe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029db000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:24:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:24:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:24:31 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 13:24:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:24:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8047" for this suite. 
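For context on what this spec just verified: with strategy Recreate, the Deployment controller scales the old ReplicaSet to zero before bringing up the new one, which is why the old redis ReplicaSet is dumped with Replicas:*0 while the new nginx pod is still Pending and "not available". A minimal Go sketch of a Deployment in that shape (names and image mirror the dumps above; this is illustrative, not the exact e2e fixture):

    package fixtures

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // recreateDeployment builds a Deployment equivalent in shape to the one
    // the test drives: Recreate strategy, one replica, nginx:1.14-alpine.
    func recreateDeployment() *appsv1.Deployment {
        replicas := int32(1)
        labels := map[string]string{"name": "sample-pod-3"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // Recreate deletes all old pods before any new pod is
                // created; there is no maxSurge/maxUnavailable tuning, so
                // availability drops to zero mid-rollout, exactly what the
                // ReadyReplicas:0 in the dump above reflects.
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
    }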
Feb 13 13:24:40.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:24:40.747: INFO: namespace deployment-8047 deletion completed in 8.882671923s • [SLOW TEST:17.696 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:24:40.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 13 13:24:41.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc" in namespace "projected-7503" to be "success or failure" Feb 13 13:24:41.115: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.184617ms Feb 13 13:24:43.123: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034958534s Feb 13 13:24:45.132: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044491357s Feb 13 13:24:47.144: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056505378s Feb 13 13:24:49.155: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066925552s Feb 13 13:24:51.166: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077693221s STEP: Saw pod success Feb 13 13:24:51.166: INFO: Pod "downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc" satisfied condition "success or failure" Feb 13 13:24:51.169: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc container client-container: STEP: delete the pod Feb 13 13:24:51.367: INFO: Waiting for pod downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc to disappear Feb 13 13:24:51.376: INFO: Pod downwardapi-volume-b86d4e91-f2ec-44bf-9356-59ec04b3a1bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:24:51.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7503" for this suite. 
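The pod under test mounts the downward API through a projected volume and expects its own memory request to appear in the mounted file. A sketch of that pod shape, assuming an illustrative 32Mi request, file path, and image (mounttest does appear in this cluster's image list further below):

    package fixtures

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func downwardAPIMemoryRequestPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "client-container",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_request",
                                        // requests.memory is resolved per
                                        // container, hence ContainerName.
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "requests.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }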
Feb 13 13:24:57.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:24:57.571: INFO: namespace projected-7503 deletion completed in 6.188724447s • [SLOW TEST:16.822 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:24:57.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Feb 13 13:24:58.288: INFO: created pod pod-service-account-defaultsa Feb 13 13:24:58.288: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 13 13:24:58.389: INFO: created pod pod-service-account-mountsa Feb 13 13:24:58.389: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 13 13:24:58.431: INFO: created pod pod-service-account-nomountsa Feb 13 13:24:58.432: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 13 13:24:58.457: INFO: created pod pod-service-account-defaultsa-mountspec Feb 13 13:24:58.457: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 13 13:24:58.477: INFO: created pod pod-service-account-mountsa-mountspec Feb 13 13:24:58.478: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 13 13:24:58.548: INFO: created pod pod-service-account-nomountsa-mountspec Feb 13 13:24:58.548: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 13 13:24:58.587: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 13 13:24:58.587: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 13 13:24:58.597: INFO: created pod pod-service-account-mountsa-nomountspec Feb 13 13:24:58.597: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 13 13:24:58.628: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 13 13:24:58.629: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:24:58.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-967" for this suite. 
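The matrix above crosses the service account's automountServiceAccountToken setting with the pod spec's: when the pod field is set it wins (pod-service-account-nomountsa-mountspec still mounts; pod-service-account-defaultsa-nomountspec does not), otherwise the service account's field applies, and the default is to mount. A Go sketch of the full opt-out, with illustrative names:

    package fixtures

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func noAutomount() (*corev1.ServiceAccount, *corev1.Pod) {
        off := false
        sa := &corev1.ServiceAccount{
            ObjectMeta: metav1.ObjectMeta{Name: "nomount-sa"},
            // Opt the service account out of token automount by default.
            AutomountServiceAccountToken: &off,
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token"},
            Spec: corev1.PodSpec{
                ServiceAccountName: sa.Name,
                // The pod-level field, when set, overrides the service
                // account's choice in either direction.
                AutomountServiceAccountToken: &off,
                Containers: []corev1.Container{{
                    Name:  "main",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }},
            },
        }
        return sa, pod
    }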
Feb 13 13:25:39.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:25:39.876: INFO: namespace svcaccounts-967 deletion completed in 41.143120824s • [SLOW TEST:42.305 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:25:39.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-7ce8fb59-4803-4118-8d78-4611921c074d STEP: Creating a pod to test consume configMaps Feb 13 13:25:40.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e" in namespace "configmap-6191" to be "success or failure" Feb 13 13:25:40.039: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.195873ms Feb 13 13:25:42.404: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396131205s Feb 13 13:25:44.414: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405734675s Feb 13 13:25:46.652: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644488525s Feb 13 13:25:48.666: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658319796s Feb 13 13:25:50.675: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.667368893s STEP: Saw pod success Feb 13 13:25:50.675: INFO: Pod "pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e" satisfied condition "success or failure" Feb 13 13:25:50.682: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e container configmap-volume-test: STEP: delete the pod Feb 13 13:25:50.756: INFO: Waiting for pod pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e to disappear Feb 13 13:25:50.841: INFO: Pod pod-configmaps-c8aebfb8-eca9-4f0c-92a9-b5d1c5ca262e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:25:50.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6191" for this suite. 
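Here the ConfigMap is consumed as a volume by a pod running as a non-root UID. A sketch of the shape being tested, with an illustrative UID and mount path (the DefaultMode:*420 that shows up in the object dumps above is decimal for octal 0644, the default file mode for these volumes):

    package fixtures

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func configMapPodNonRoot() *corev1.Pod {
        uid := int64(1000) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                // Run every container in the pod as the non-root UID; the
                // mounted ConfigMap files must still be readable.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/configmap-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                        },
                    },
                }},
            },
        }
    }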
Feb 13 13:25:56.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:25:57.020: INFO: namespace configmap-6191 deletion completed in 6.1711773s • [SLOW TEST:17.143 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:25:57.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8f494902-27cb-43f8-a676-96c7f61d3b2f STEP: Creating a pod to test consume secrets Feb 13 13:25:57.393: INFO: Waiting up to 5m0s for pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64" in namespace "secrets-5987" to be "success or failure" Feb 13 13:25:57.404: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.512827ms Feb 13 13:25:59.416: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022465255s Feb 13 13:26:01.427: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03346277s Feb 13 13:26:03.440: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047229591s Feb 13 13:26:05.448: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054629825s STEP: Saw pod success Feb 13 13:26:05.448: INFO: Pod "pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64" satisfied condition "success or failure" Feb 13 13:26:05.452: INFO: Trying to get logs from node iruya-node pod pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64 container secret-volume-test: STEP: delete the pod Feb 13 13:26:05.511: INFO: Waiting for pod pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64 to disappear Feb 13 13:26:05.528: INFO: Pod pod-secrets-89598928-27ad-4db9-90c4-da0fa598be64 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:26:05.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5987" for this suite. 
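This spec plants two secrets with the same name in two different namespaces (hence the extra secret-namespace teardown logged just below) and checks that the pod resolves the copy from its own namespace only. A sketch with hypothetical namespace names:

    package fixtures

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // sameNameSecret is created once per namespace; the two copies share a
    // name but are entirely independent objects.
    func sameNameSecret(ns string) *corev1.Secret {
        return &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: ns},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
    }

    // secretConsumerPod mounts "secret-test" by name; volume resolution is
    // always scoped to the pod's own namespace ("secrets-a" here).
    func secretConsumerPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-a"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
                    },
                }},
            },
        }
    }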
Feb 13 13:26:11.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:26:11.813: INFO: namespace secrets-5987 deletion completed in 6.202712096s STEP: Destroying namespace "secret-namespace-6833" for this suite. Feb 13 13:26:17.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:26:18.012: INFO: namespace secret-namespace-6833 deletion completed in 6.199218589s • [SLOW TEST:20.993 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:26:18.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:26:18.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7323' Feb 13 13:26:20.620: INFO: stderr: "" Feb 13 13:26:20.620: INFO: stdout: "replicationcontroller/redis-master created\n" Feb 13 13:26:20.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7323' Feb 13 13:26:21.200: INFO: stderr: "" Feb 13 13:26:21.200: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Feb 13 13:26:22.215: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:22.215: INFO: Found 0 / 1 Feb 13 13:26:23.210: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:23.210: INFO: Found 0 / 1 Feb 13 13:26:24.209: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:24.209: INFO: Found 0 / 1 Feb 13 13:26:25.210: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:25.211: INFO: Found 0 / 1 Feb 13 13:26:26.208: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:26.208: INFO: Found 0 / 1 Feb 13 13:26:27.218: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:27.218: INFO: Found 0 / 1 Feb 13 13:26:28.212: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:28.212: INFO: Found 1 / 1 Feb 13 13:26:28.212: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 13 13:26:28.218: INFO: Selector matched 1 pods for map[app:redis] Feb 13 13:26:28.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 13 13:26:28.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-q6mhh --namespace=kubectl-7323' Feb 13 13:26:28.409: INFO: stderr: "" Feb 13 13:26:28.409: INFO: stdout: "Name: redis-master-q6mhh\nNamespace: kubectl-7323\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 13 Feb 2020 13:26:20 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://9a8576bdb9339afa19bc6c72d1c1dc9621f5de29883f4cfc3176192fe2d04e41\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 13 Feb 2020 13:26:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qq8dw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qq8dw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qq8dw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-7323/redis-master-q6mhh to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Feb 13 13:26:28.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7323' Feb 13 13:26:28.585: INFO: stderr: "" Feb 13 13:26:28.586: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7323\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-q6mhh\n" Feb 13 13:26:28.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7323' Feb 13 13:26:28.713: INFO: stderr: "" Feb 13 13:26:28.713: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7323\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.49.85\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Feb 13 13:26:28.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Feb 13 13:26:28.873: INFO: stderr: "" Feb 13 13:26:28.873: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 13 Feb 2020 13:25:45 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 13 Feb 2020 13:25:45 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 13 Feb 2020 13:25:45 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 13 Feb 2020 13:25:45 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 193d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 124d\n kubectl-7323 redis-master-q6mhh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Feb 13 13:26:28.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7323' Feb 13 13:26:29.004: INFO: stderr: "" Feb 13 13:26:29.004: INFO: stdout: "Name: kubectl-7323\nLabels: e2e-framework=kubectl\n e2e-run=ca3a3677-8b5b-42db-ad91-9cc60f12a6da\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:26:29.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7323" for this suite. 
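As the Running '/usr/local/bin/kubectl ...' lines show, the e2e framework exercises kubectl by shelling out rather than calling the API directly, then asserts on the captured stdout. A minimal Go sketch of that pattern (kubeconfig path and object names are taken from the log purely for illustration):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // describe shells out to kubectl the way the framework does and returns
    // the combined stdout/stderr for inspection.
    func describe(kind, name, namespace string) string {
        out, err := exec.Command(
            "kubectl", "--kubeconfig", "/root/.kube/config",
            "describe", kind, name, "--namespace", namespace,
        ).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl describe %s %s: %v\n%s", kind, name, err, out)
        }
        return string(out)
    }

    func main() {
        // The conformance check then greps this output for the fields it
        // considers "relevant information" (name, node, labels, events, ...).
        fmt.Print(describe("pod", "redis-master-q6mhh", "kubectl-7323"))
    }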
Feb 13 13:26:51.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:26:51.145: INFO: namespace kubectl-7323 deletion completed in 22.134988849s • [SLOW TEST:33.130 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:26:51.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:26:57.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1050" for this suite. Feb 13 13:27:03.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:27:03.705: INFO: namespace namespaces-1050 deletion completed in 6.170531411s STEP: Destroying namespace "nsdeletetest-7656" for this suite. Feb 13 13:27:03.709: INFO: Namespace nsdeletetest-7656 was already deleted STEP: Destroying namespace "nsdeletetest-239" for this suite. 
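The assertion in this spec is that deleting a namespace garbage-collects the services inside it, so a recreated namespace of the same name starts empty. A sketch against the client-go of the v1.15 era this log comes from (newer releases add context.Context and options arguments to these calls); resource names are hypothetical:

    package main

    import (
        "log"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Delete the namespace; the namespace controller is expected to
        // remove every object inside it, services included.
        if err := cs.CoreV1().Namespaces().Delete("nsdeletetest", &metav1.DeleteOptions{}); err != nil {
            log.Fatal(err)
        }
        // ... wait for the namespace to disappear and recreate it, then
        // verify the service did not survive the round trip:
        _, err = cs.CoreV1().Services("nsdeletetest").Get("test-service", metav1.GetOptions{})
        if !apierrors.IsNotFound(err) {
            log.Fatalf("expected service to be gone, got err=%v", err)
        }
    }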
Feb 13 13:27:09.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:27:09.897: INFO: namespace nsdeletetest-239 deletion completed in 6.188437049s • [SLOW TEST:18.752 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:27:09.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7429 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7429 STEP: Creating statefulset with conflicting port in namespace statefulset-7429 STEP: Waiting until pod test-pod will start running in namespace statefulset-7429 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7429 Feb 13 13:27:20.096: INFO: Observed stateful pod in namespace: statefulset-7429, name: ss-0, uid: f924e91d-18fd-4089-9c36-ccb3303bb1f1, status phase: Pending. Waiting for statefulset controller to delete. 
Feb 13 13:32:20.096: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 13 13:32:20.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-7429' Feb 13 13:32:20.268: INFO: stderr: "" Feb 13 13:32:20.268: INFO: stdout: "Name: ss-0\nNamespace: statefulset-7429\nPriority: 0\nNode: iruya-node/\nLabels: baz=blah\n controller-revision-hash=ss-6f98bdb9c4\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxrjl (ro)\nVolumes:\n default-token-fxrjl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-fxrjl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m8s kubelet, iruya-node Predicate PodFitsHostPorts failed\n" Feb 13 13:32:20.268: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-7429 Priority: 0 Node: iruya-node/ Labels: baz=blah controller-revision-hash=ss-6f98bdb9c4 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxrjl (ro) Volumes: default-token-fxrjl: Type: Secret (a volume populated by a Secret) SecretName: default-token-fxrjl Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m8s kubelet, iruya-node Predicate PodFitsHostPorts failed Feb 13 13:32:20.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-7429 --tail=100' Feb 13 13:32:20.513: INFO: rc: 1 Feb 13 13:32:20.514: INFO: Last 100 log lines of ss-0: Feb 13 13:32:20.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-7429' Feb 13 13:32:20.634: INFO: stderr: "" Feb 13 13:32:20.634: INFO: stdout: "Name: test-pod\nNamespace: statefulset-7429\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 13 Feb 2020 13:27:10 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nContainers:\n nginx:\n Container ID: docker://bfd36a0b982ff83cf6243dca473689f372a65fcbbb697e62ada8c61f6e32bad7\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Thu, 13 Feb 2020 13:27:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxrjl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-fxrjl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-fxrjl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m5s kubelet, iruya-node Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m2s kubelet, iruya-node Created container nginx\n Normal Started 5m2s kubelet, iruya-node Started container nginx\n" Feb 13 13:32:20.634: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-7429 Priority: 0 Node: iruya-node/10.96.3.65 Start Time: Thu, 13 Feb 2020 13:27:10 +0000 Labels: Annotations: Status: Running IP: 10.44.0.1 Containers: nginx: Container ID: docker://bfd36a0b982ff83cf6243dca473689f372a65fcbbb697e62ada8c61f6e32bad7 Image: docker.io/library/nginx:1.14-alpine Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Thu, 13 Feb 2020 13:27:18 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxrjl (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-fxrjl: Type: Secret (a volume populated by a Secret) SecretName: default-token-fxrjl Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m5s kubelet, iruya-node Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m2s kubelet, iruya-node Created container nginx Normal Started 5m2s kubelet, iruya-node Started container nginx Feb 13 13:32:20.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-7429 --tail=100' Feb 13 13:32:20.738: INFO: stderr: "" Feb 13 13:32:20.738: INFO: stdout: "" Feb 13 13:32:20.738: INFO: Last 100 log lines of test-pod: Feb 13 13:32:20.738: INFO: Deleting all statefulset in ns statefulset-7429 Feb 13 13:32:20.743: INFO: Scaling statefulset ss to 0 Feb 13 13:32:30.779: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 13:32:30.785: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-7429". STEP: Found 12 events. 
Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:10 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-7429/ss is recreating failed Pod ss-0 Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:10 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:10 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:10 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:10 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:11 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:11 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:11 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:12 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:15 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:18 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx Feb 13 13:32:30.818: INFO: At 2020-02-13 13:27:18 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx Feb 13 13:32:30.828: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 13:32:30.828: INFO: test-pod iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:27:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:27:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:27:10 +0000 UTC }] Feb 13 13:32:30.828: INFO: Feb 13 13:32:30.842: INFO: Logging node info for node iruya-node Feb 13 13:32:30.850: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:24199459,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki 
BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-13 13:31:46 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-13 13:31:46 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-13 13:31:46 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-13 13:31:46 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 
gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[aquasec/kube-bench@sha256:33d50ec2fdc6644ffa70b088af1a9932f16d6bb9391a9f73045c8c6b4f73f4e4 aquasec/kube-bench:latest] 21536876} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} 
{[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 13 13:32:30.852: INFO: Logging kubelet events for node iruya-node Feb 13 13:32:30.859: INFO: Logging pods the kubelet thinks is on node iruya-node Feb 13 13:32:30.876: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded) Feb 13 13:32:30.876: INFO: Container weave ready: true, restart count 0 Feb 13 13:32:30.876: INFO: Container weave-npc ready: true, restart count 0 Feb 13 13:32:30.876: INFO: kube-bench-j7kcs started at 2020-02-11 06:42:30 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:30.876: INFO: Container kube-bench ready: false, restart count 0 Feb 13 13:32:30.876: INFO: test-pod started at 2020-02-13 13:27:10 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:30.876: INFO: Container nginx ready: true, restart count 0 Feb 13 13:32:30.876: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:30.876: INFO: Container kube-proxy ready: true, restart count 0 W0213 13:32:30.882825 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 13 13:32:31.010: INFO: Latency metrics for node iruya-node Feb 13 13:32:31.010: INFO: Logging node info for node iruya-server-sfge57q7djm7 Feb 13 13:32:31.017: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:24199494,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-13 13:32:12 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-13 13:32:12 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-13 13:32:12 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-13 13:32:12 +0000 
UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} 
{[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 13 13:32:31.018: INFO: Logging kubelet events for node iruya-server-sfge57q7djm7 Feb 13 13:32:31.030: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 Feb 13 13:32:31.042: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container etcd ready: true, restart count 0 Feb 13 13:32:31.042: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded) Feb 13 13:32:31.042: INFO: Container weave ready: true, restart count 0 Feb 13 13:32:31.042: INFO: Container weave-npc ready: true, restart count 0 Feb 13 13:32:31.042: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container coredns ready: true, restart count 0 Feb 13 13:32:31.042: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container kube-controller-manager ready: true, restart count 21 Feb 13 13:32:31.042: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container kube-proxy ready: true, restart count 0 Feb 13 13:32:31.042: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container kube-apiserver ready: true, restart count 0 Feb 13 13:32:31.042: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container kube-scheduler ready: true, restart count 13 Feb 13 13:32:31.042: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 13 13:32:31.042: INFO: Container coredns ready: true, restart count 0 W0213 13:32:31.085180 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 13 13:32:31.147: INFO: Latency metrics for node iruya-server-sfge57q7djm7 Feb 13 13:32:31.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7429" for this suite. 
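Why this spec failed: test-pod already holds host port 21017 on iruya-node, so every incarnation of ss-0 is rejected by the kubelet's PodFitsHostPorts predicate. The events above show the controller did delete and recreate ss-0 around 13:27:10-13:27:12, but the test's watch, which started at 13:27:20, never observed the recreate-and-delete cycle it required within its 5-minute window, hence "Pod ss-0 expected to be re-created at least once" and the Failure verdict below. The conflict comes from both pods carrying a spec like this (sketch; the port matches the describe output above):

    package fixtures

    import corev1 "k8s.io/api/core/v1"

    func hostPortSpec() corev1.PodSpec {
        return corev1.PodSpec{
            // Pin both pods to the same node to force the conflict.
            NodeName: "iruya-node",
            Containers: []corev1.Container{{
                Name:  "nginx",
                Image: "docker.io/library/nginx:1.14-alpine",
                // Two pods on one node cannot both bind host port 21017;
                // the second is rejected with PodFitsHostPorts, as in the
                // Warning events above.
                Ports: []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
            }},
        }
    }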
Feb 13 13:32:53.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:32:53.402: INFO: namespace statefulset-7429 deletion completed in 22.247165951s • Failure [343.504 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:32:20.096: Pod ss-0 expected to be re-created at least once /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:32:53.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:32:53.514: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 13 13:32:53.528: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 13 13:32:58.546: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 13 13:33:04.565: INFO: Creating deployment "test-rolling-update-deployment" Feb 13 13:33:04.580: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 13 13:33:04.625: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 13 13:33:06.652: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 13 13:33:06.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 
13:33:08.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:33:10.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717197584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 13:33:12.721: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 13 13:33:12.738: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6577,SelfLink:/apis/apps/v1/namespaces/deployment-6577/deployments/test-rolling-update-deployment,UID:9cba0947-4f84-4e1b-ba95-0926d4d2cf3e,ResourceVersion:24199650,Generation:1,CreationTimestamp:2020-02-13 13:33:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-13 13:33:04 +0000 UTC 2020-02-13 13:33:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-13 13:33:11 +0000 UTC 2020-02-13 13:33:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 13 13:33:12.741: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6577,SelfLink:/apis/apps/v1/namespaces/deployment-6577/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:5afe8e44-d1ba-4a8b-9873-b5846631f780,ResourceVersion:24199639,Generation:1,CreationTimestamp:2020-02-13 13:33:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9cba0947-4f84-4e1b-ba95-0926d4d2cf3e 0xc0023575b7 0xc0023575b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 13 13:33:12.741: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 13 13:33:12.741: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6577,SelfLink:/apis/apps/v1/namespaces/deployment-6577/replicasets/test-rolling-update-controller,UID:2ca9d117-9fb7-4d0d-b452-644ec90e6c80,ResourceVersion:24199648,Generation:2,CreationTimestamp:2020-02-13 13:32:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9cba0947-4f84-4e1b-ba95-0926d4d2cf3e 0xc0023574d7 0xc0023574d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:33:12.745: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-x52h2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-x52h2,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6577,SelfLink:/api/v1/namespaces/deployment-6577/pods/test-rolling-update-deployment-79f6b9d75c-x52h2,UID:7b15940a-036e-4da6-abb4-c7b3b4e8dab1,ResourceVersion:24199638,Generation:0,CreationTimestamp:2020-02-13 13:33:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 5afe8e44-d1ba-4a8b-9873-b5846631f780 0xc000bca2b7 0xc000bca2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sknsm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sknsm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-sknsm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bca330} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bca350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:33:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:33:11 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:33:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:33:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-13 13:33:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-13 13:33:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://228e8d84421a621a64fb056ca855505c72f3995a5e826947692bafcb8b458d5a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:33:12.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6577" for this suite. Feb 13 13:33:20.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:33:20.876: INFO: namespace deployment-6577 deletion completed in 8.127601962s • [SLOW TEST:27.472 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:33:20.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 13:33:21.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2149' Feb 13 13:33:21.175: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 13:33:21.175: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Feb 13 13:33:21.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2149' Feb 13 13:33:21.387: INFO: stderr: "" Feb 13 13:33:21.388: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:33:21.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2149" for this suite. Feb 13 13:33:27.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:33:27.559: INFO: namespace kubectl-2149 deletion completed in 6.165186409s • [SLOW TEST:6.683 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:33:27.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:33:35.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5858" for this suite. 
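The `kubectl run --generator=job/v1` invocation in the Kubectl run job spec above prints its own replacement advice. A sketch of the non-deprecated equivalent, using the same image and namespace as the run (note that `kubectl create job` emits a pod template with restartPolicy Never, so reproducing the OnFailure variant the test exercises would still need an explicit manifest):

$ kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2149
$ kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-2149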
Feb 13 13:33:41.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:33:41.930: INFO: namespace emptydir-wrapper-5858 deletion completed in 6.184589706s • [SLOW TEST:14.370 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:33:41.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 13 13:33:42.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6451' Feb 13 13:33:42.431: INFO: stderr: "" Feb 13 13:33:42.432: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 13 13:33:42.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:33:42.612: INFO: stderr: "" Feb 13 13:33:42.612: INFO: stdout: "update-demo-nautilus-g6t7n update-demo-nautilus-ncqkf " Feb 13 13:33:42.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:42.915: INFO: stderr: "" Feb 13 13:33:42.915: INFO: stdout: "" Feb 13 13:33:42.915: INFO: update-demo-nautilus-g6t7n is created but not running Feb 13 13:33:47.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:33:49.418: INFO: stderr: "" Feb 13 13:33:49.418: INFO: stdout: "update-demo-nautilus-g6t7n update-demo-nautilus-ncqkf " Feb 13 13:33:49.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:50.085: INFO: stderr: "" Feb 13 13:33:50.085: INFO: stdout: "" Feb 13 13:33:50.085: INFO: update-demo-nautilus-g6t7n is created but not running Feb 13 13:33:55.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:33:55.250: INFO: stderr: "" Feb 13 13:33:55.250: INFO: stdout: "update-demo-nautilus-g6t7n update-demo-nautilus-ncqkf " Feb 13 13:33:55.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:55.357: INFO: stderr: "" Feb 13 13:33:55.357: INFO: stdout: "true" Feb 13 13:33:55.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:55.449: INFO: stderr: "" Feb 13 13:33:55.449: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 13:33:55.449: INFO: validating pod update-demo-nautilus-g6t7n Feb 13 13:33:55.468: INFO: got data: { "image": "nautilus.jpg" } Feb 13 13:33:55.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 13:33:55.469: INFO: update-demo-nautilus-g6t7n is verified up and running Feb 13 13:33:55.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqkf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:55.569: INFO: stderr: "" Feb 13 13:33:55.569: INFO: stdout: "true" Feb 13 13:33:55.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ncqkf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:33:55.672: INFO: stderr: "" Feb 13 13:33:55.672: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 13:33:55.672: INFO: validating pod update-demo-nautilus-ncqkf Feb 13 13:33:55.696: INFO: got data: { "image": "nautilus.jpg" } Feb 13 13:33:55.696: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 13:33:55.696: INFO: update-demo-nautilus-ncqkf is verified up and running STEP: scaling down the replication controller Feb 13 13:33:55.699: INFO: scanned /root for discovery docs: Feb 13 13:33:55.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6451' Feb 13 13:33:56.900: INFO: stderr: "" Feb 13 13:33:56.900: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 13 13:33:56.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:33:57.137: INFO: stderr: "" Feb 13 13:33:57.137: INFO: stdout: "update-demo-nautilus-g6t7n update-demo-nautilus-ncqkf " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 13 13:34:02.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:34:02.260: INFO: stderr: "" Feb 13 13:34:02.260: INFO: stdout: "update-demo-nautilus-g6t7n " Feb 13 13:34:02.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:02.343: INFO: stderr: "" Feb 13 13:34:02.344: INFO: stdout: "true" Feb 13 13:34:02.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:02.433: INFO: stderr: "" Feb 13 13:34:02.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 13:34:02.433: INFO: validating pod update-demo-nautilus-g6t7n Feb 13 13:34:02.437: INFO: got data: { "image": "nautilus.jpg" } Feb 13 13:34:02.437: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 13:34:02.437: INFO: update-demo-nautilus-g6t7n is verified up and running STEP: scaling up the replication controller Feb 13 13:34:02.439: INFO: scanned /root for discovery docs: Feb 13 13:34:02.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6451' Feb 13 13:34:03.636: INFO: stderr: "" Feb 13 13:34:03.636: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 13 13:34:03.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:34:03.805: INFO: stderr: "" Feb 13 13:34:03.806: INFO: stdout: "update-demo-nautilus-bbvcx update-demo-nautilus-g6t7n " Feb 13 13:34:03.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbvcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:03.971: INFO: stderr: "" Feb 13 13:34:03.971: INFO: stdout: "" Feb 13 13:34:03.971: INFO: update-demo-nautilus-bbvcx is created but not running Feb 13 13:34:08.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:34:09.149: INFO: stderr: "" Feb 13 13:34:09.150: INFO: stdout: "update-demo-nautilus-bbvcx update-demo-nautilus-g6t7n " Feb 13 13:34:09.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbvcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:09.266: INFO: stderr: "" Feb 13 13:34:09.266: INFO: stdout: "" Feb 13 13:34:09.266: INFO: update-demo-nautilus-bbvcx is created but not running Feb 13 13:34:14.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6451' Feb 13 13:34:14.470: INFO: stderr: "" Feb 13 13:34:14.470: INFO: stdout: "update-demo-nautilus-bbvcx update-demo-nautilus-g6t7n " Feb 13 13:34:14.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbvcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:14.610: INFO: stderr: "" Feb 13 13:34:14.610: INFO: stdout: "true" Feb 13 13:34:14.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbvcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:14.719: INFO: stderr: "" Feb 13 13:34:14.719: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 13:34:14.719: INFO: validating pod update-demo-nautilus-bbvcx Feb 13 13:34:14.733: INFO: got data: { "image": "nautilus.jpg" } Feb 13 13:34:14.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 13:34:14.733: INFO: update-demo-nautilus-bbvcx is verified up and running Feb 13 13:34:14.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:14.839: INFO: stderr: "" Feb 13 13:34:14.839: INFO: stdout: "true" Feb 13 13:34:14.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6t7n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6451' Feb 13 13:34:14.953: INFO: stderr: "" Feb 13 13:34:14.953: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 13:34:14.953: INFO: validating pod update-demo-nautilus-g6t7n Feb 13 13:34:14.957: INFO: got data: { "image": "nautilus.jpg" } Feb 13 13:34:14.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 13:34:14.957: INFO: update-demo-nautilus-g6t7n is verified up and running STEP: using delete to clean up resources Feb 13 13:34:14.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6451' Feb 13 13:34:15.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 13 13:34:15.083: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 13 13:34:15.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6451' Feb 13 13:34:15.204: INFO: stderr: "No resources found.\n" Feb 13 13:34:15.204: INFO: stdout: "" Feb 13 13:34:15.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6451 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 13 13:34:15.326: INFO: stderr: "" Feb 13 13:34:15.327: INFO: stdout: "update-demo-nautilus-bbvcx\nupdate-demo-nautilus-g6t7n\n" Feb 13 13:34:15.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6451' Feb 13 13:34:16.990: INFO: stderr: "No resources found.\n" Feb 13 13:34:16.990: INFO: stdout: "" Feb 13 13:34:16.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6451 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 13 13:34:17.264: INFO: stderr: "" Feb 13 13:34:17.264: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:34:17.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6451" for this suite. 
Feb 13 13:34:41.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:34:41.466: INFO: namespace kubectl-6451 deletion completed in 24.187649347s • [SLOW TEST:59.535 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:34:41.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:34:50.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4254" for this suite. 
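The three STEPs above compress the whole adoption flow: a bare pod with a name label exists first, a replication controller whose selector matches is created second, and the controller manager then sets itself as the pod's owner instead of creating a fresh replica. A hand-run sketch of the same flow (manifests inferred from the STEP names; image borrowed from elsewhere in this run):

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # expect: ReplicationController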
Feb 13 13:35:12.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:35:12.992: INFO: namespace replication-controller-4254 deletion completed in 22.214353s • [SLOW TEST:31.525 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:35:12.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-1f68c8de-59ab-4c51-a3fa-6e5884cd2cc3 STEP: Creating a pod to test consume secrets Feb 13 13:35:13.061: INFO: Waiting up to 5m0s for pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788" in namespace "secrets-3331" to be "success or failure" Feb 13 13:35:13.115: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788": Phase="Pending", Reason="", readiness=false. Elapsed: 53.957393ms Feb 13 13:35:15.127: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066060354s Feb 13 13:35:17.141: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080705271s Feb 13 13:35:19.152: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091248563s Feb 13 13:35:21.160: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099560633s STEP: Saw pod success Feb 13 13:35:21.160: INFO: Pod "pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788" satisfied condition "success or failure" Feb 13 13:35:21.165: INFO: Trying to get logs from node iruya-node pod pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788 container secret-volume-test: STEP: delete the pod Feb 13 13:35:23.893: INFO: Waiting for pod pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788 to disappear Feb 13 13:35:23.905: INFO: Pod pod-secrets-4c530bca-d651-4f45-abb0-77b86eda2788 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:35:23.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3331" for this suite. 
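"Consumable in multiple volumes" above means one Secret projected through two volume entries of the same pod, each with its own mount point. A minimal sketch of that shape (secret name, paths, and the cat command are illustrative, not the test's exact manifest):

$ kubectl create secret generic secret-demo --from-literal=data-1=value-1
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/nginx:1.14-alpine
    command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-demo
  - name: secret-volume-2
    secret:
      secretName: secret-demo
EOF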
Feb 13 13:35:30.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:35:30.240: INFO: namespace secrets-3331 deletion completed in 6.312490397s • [SLOW TEST:17.248 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:35:30.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 13 13:35:49.540: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:49.557: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:35:51.557: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:51.567: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:35:53.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:53.568: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:35:55.557: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:55.565: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:35:57.557: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:57.572: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:35:59.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:35:59.569: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:01.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:01.573: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:03.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:03.567: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:05.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:05.568: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:07.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:07.567: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:09.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:09.567: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:11.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:11.568: INFO: Pod pod-with-prestop-exec-hook 
still exists Feb 13 13:36:13.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:13.574: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:15.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:15.568: INFO: Pod pod-with-prestop-exec-hook still exists Feb 13 13:36:17.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 13 13:36:17.567: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:36:17.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6581" for this suite. Feb 13 13:36:47.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:36:47.809: INFO: namespace container-lifecycle-hook-6581 deletion completed in 30.191664168s • [SLOW TEST:77.568 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:36:47.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6e37dc8d-9d09-4bfe-907a-f0f6be2904bb STEP: Creating a pod to test consume secrets Feb 13 13:36:47.957: INFO: Waiting up to 5m0s for pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c" in namespace "secrets-8490" to be "success or failure" Feb 13 13:36:47.964: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.59056ms Feb 13 13:36:49.980: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022560739s Feb 13 13:36:51.986: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028724164s Feb 13 13:36:53.999: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041519037s Feb 13 13:36:56.007: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.050102101s STEP: Saw pod success Feb 13 13:36:56.008: INFO: Pod "pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c" satisfied condition "success or failure" Feb 13 13:36:56.012: INFO: Trying to get logs from node iruya-node pod pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c container secret-volume-test: STEP: delete the pod Feb 13 13:36:56.108: INFO: Waiting for pod pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c to disappear Feb 13 13:36:56.187: INFO: Pod pod-secrets-622fcf7b-d1eb-4c9f-a3a2-eeb0566d6e5c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:36:56.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8490" for this suite. Feb 13 13:37:02.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:37:02.312: INFO: namespace secrets-8490 deletion completed in 6.119121765s • [SLOW TEST:14.503 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:37:02.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-85416850-32d5-4433-978e-f6f5db56f0c6 STEP: Creating secret with name s-test-opt-upd-81ac56d6-d764-4488-980e-665a64318e7d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-85416850-32d5-4433-978e-f6f5db56f0c6 STEP: Updating secret s-test-opt-upd-81ac56d6-d764-4488-980e-665a64318e7d STEP: Creating secret with name s-test-opt-create-6f78bd9a-102d-4349-bf70-b8334fafce13 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:37:14.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5563" for this suite. 
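The two secret specs above exercise two knobs of the secret volume source: defaultMode fixes the permission bits on the projected files (which is why the mode check is [LinuxOnly]), and optional: true lets the pod start while the named secret is still absent and then pick up later creates, updates, and deletes, which is exactly what the s-test-opt-del / -upd / -create steps observe. A combined sketch of the volume stanza (names hypothetical):

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-opt-demo
spec:
  containers:
  - name: secret-volume-test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt
      optional: true     # pod starts even if the secret does not exist yet
      defaultMode: 0400  # projected files get mode 0400
EOF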
Feb 13 13:37:36.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:37:36.920: INFO: namespace secrets-5563 deletion completed in 22.169170249s • [SLOW TEST:34.607 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:37:36.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8982 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Feb 13 13:37:37.007: INFO: Found 0 stateful pods, waiting for 3 Feb 13 13:37:47.018: INFO: Found 2 stateful pods, waiting for 3 Feb 13 13:37:57.023: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:37:57.024: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:37:57.024: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 13 13:38:07.019: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:38:07.019: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:38:07.019: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 13 13:38:07.068: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 13 13:38:17.127: INFO: Updating stateful set ss2 Feb 13 13:38:17.138: INFO: Waiting for Pod statefulset-8982/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 13 13:38:27.343: INFO: Found 2 stateful pods, waiting for 3 Feb 13 13:38:37.368: INFO: Found 2 stateful pods, waiting for 3 Feb 13 13:38:47.355: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:38:47.355: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, 
currently Running - Ready=true Feb 13 13:38:47.355: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Feb 13 13:38:57.433: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:38:57.434: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:38:57.434: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 13 13:38:57.471: INFO: Updating stateful set ss2 Feb 13 13:38:57.490: INFO: Waiting for Pod statefulset-8982/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 13:39:07.560: INFO: Updating stateful set ss2 Feb 13 13:39:07.580: INFO: Waiting for StatefulSet statefulset-8982/ss2 to complete update Feb 13 13:39:07.580: INFO: Waiting for Pod statefulset-8982/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 13:39:17.602: INFO: Waiting for StatefulSet statefulset-8982/ss2 to complete update Feb 13 13:39:17.602: INFO: Waiting for Pod statefulset-8982/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 13:39:27.603: INFO: Waiting for StatefulSet statefulset-8982/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 13 13:39:37.602: INFO: Deleting all statefulset in ns statefulset-8982 Feb 13 13:39:37.608: INFO: Scaling statefulset ss2 to 0 Feb 13 13:40:07.651: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 13:40:07.661: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:40:07.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8982" for this suite. 
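The canary and phased steps above hang off a single field, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition move to the new revision while pods below it keep the old one, so a partition above the replica count pins everything (the "Not applying an update" step) and lowering it one ordinal at a time produces first the canary and then the phased rollout. A sketch with strategic-merge patches (statefulset name, namespace, and images from the log; the container name "nginx" is an assumption):

$ kubectl patch statefulset ss2 --namespace=statefulset-8982 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
$ kubectl patch statefulset ss2 --namespace=statefulset-8982 \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.15-alpine"}]}}}}'
$ kubectl patch statefulset ss2 --namespace=statefulset-8982 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   # canary: only ss2-2 updates
$ kubectl patch statefulset ss2 --namespace=statefulset-8982 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # phased completion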
Feb 13 13:40:15.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:40:15.916: INFO: namespace statefulset-8982 deletion completed in 8.177919465s • [SLOW TEST:158.995 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:40:15.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 13 13:40:16.002: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:40:33.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6421" for this suite. 
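
The behavior under test here is that containers listed in spec.initContainers run to completion, in order, before the regular containers start, even with restartPolicy: Always. A minimal sketch of such a pod (names, images, and commands are illustrative, not from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo               # hypothetical name; the e2e pod name is generated
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ['sh', '-c', 'exit 0']   # must succeed before init2 starts
  - name: init2
    image: busybox:1.29
    command: ['sh', '-c', 'exit 0']   # must succeed before the main container starts
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1       # starts only after both init containers exit 0
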
Feb 13 13:40:55.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:40:55.459: INFO: namespace init-container-6421 deletion completed in 22.151335672s
• [SLOW TEST:39.542 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:40:55.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 13 13:41:02.703: INFO: 0 pods remaining
Feb 13 13:41:02.703: INFO: 0 pods has nil DeletionTimestamp
Feb 13 13:41:02.703: INFO:
STEP: Gathering metrics
W0213 13:41:03.379136 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 13:41:03.379: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:41:03.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8082" for this suite.
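
The deleteOptions named in this spec are the options sent with the DELETE request for the replication controller. Assuming the intended policy is foreground cascading (the log only shows the RC lingering until its pods are gone), the request body would look roughly like this, rendered as YAML for readability:

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # assumption: keeps the RC, with deletionTimestamp set, until the garbage collector removes its pods; Orphan would instead leave the pods behind
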
Feb 13 13:41:15.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:41:15.767: INFO: namespace gc-8082 deletion completed in 12.384700797s • [SLOW TEST:20.307 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:41:15.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 13 13:41:24.022: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:41:24.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8505" for this suite. 
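
The DONE the test matched above is whatever the container writes to its terminationMessagePath before exiting; the kubelet provisions a writable file at that path, which is why a non-root process can write it. A minimal sketch exercising the same knobs, with an assumed image, path, and uid (the log only confirms the message DONE):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ['sh', '-c', 'echo -n DONE > /dev/termination-custom-log']
    terminationMessagePath: /dev/termination-custom-log   # non-default path, read back into status
    securityContext:
      runAsUser: 1000                                     # non-root, as the spec name requires

After the pod exits, the message surfaces in status.containerStatuses[0].state.terminated.message.
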
Feb 13 13:41:30.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:41:30.266: INFO: namespace container-runtime-8505 deletion completed in 6.163582594s • [SLOW TEST:14.497 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:41:30.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-64c4b038-4f62-42f0-b0d2-a5c50450edbd STEP: Creating a pod to test consume configMaps Feb 13 13:41:30.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086" in namespace "configmap-3435" to be "success or failure" Feb 13 13:41:30.380: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086": Phase="Pending", Reason="", readiness=false. Elapsed: 10.288398ms Feb 13 13:41:32.393: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023559212s Feb 13 13:41:34.406: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036125527s Feb 13 13:41:36.419: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049153678s Feb 13 13:41:38.430: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.060345654s STEP: Saw pod success Feb 13 13:41:38.430: INFO: Pod "pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086" satisfied condition "success or failure" Feb 13 13:41:38.435: INFO: Trying to get logs from node iruya-node pod pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086 container configmap-volume-test: STEP: delete the pod Feb 13 13:41:38.542: INFO: Waiting for pod pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086 to disappear Feb 13 13:41:38.630: INFO: Pod pod-configmaps-29a3b141-8f38-4995-bb19-3c6d17625086 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:41:38.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3435" for this suite. Feb 13 13:41:44.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:41:44.792: INFO: namespace configmap-3435 deletion completed in 6.14690533s • [SLOW TEST:14.525 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:41:44.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d59d4a87-1d7b-4fdf-9e5b-6daefdf10241 STEP: Creating a pod to test consume configMaps Feb 13 13:41:44.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7" in namespace "configmap-8205" to be "success or failure" Feb 13 13:41:44.984: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.181526ms Feb 13 13:41:46.990: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017560374s Feb 13 13:41:49.002: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02961756s Feb 13 13:41:51.014: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041442707s Feb 13 13:41:53.022: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.050125308s STEP: Saw pod success Feb 13 13:41:53.023: INFO: Pod "pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7" satisfied condition "success or failure" Feb 13 13:41:53.034: INFO: Trying to get logs from node iruya-node pod pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7 container configmap-volume-test: STEP: delete the pod Feb 13 13:41:53.106: INFO: Waiting for pod pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7 to disappear Feb 13 13:41:53.121: INFO: Pod pod-configmaps-115f55ae-dbb4-467a-be5d-9a6a82c627b7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:41:53.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8205" for this suite. Feb 13 13:41:59.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:41:59.348: INFO: namespace configmap-8205 deletion completed in 6.217102323s • [SLOW TEST:14.556 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:41:59.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e15c4228-9d73-4445-b9e9-3be50cfa6479 STEP: Creating a pod to test consume secrets Feb 13 13:41:59.500: INFO: Waiting up to 5m0s for pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8" in namespace "secrets-8372" to be "success or failure" Feb 13 13:41:59.514: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.373381ms Feb 13 13:42:01.526: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02578245s Feb 13 13:42:03.533: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033492087s Feb 13 13:42:05.549: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049406195s Feb 13 13:42:07.559: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.059589878s STEP: Saw pod success Feb 13 13:42:07.560: INFO: Pod "pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8" satisfied condition "success or failure" Feb 13 13:42:07.563: INFO: Trying to get logs from node iruya-node pod pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8 container secret-volume-test: STEP: delete the pod Feb 13 13:42:07.637: INFO: Waiting for pod pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8 to disappear Feb 13 13:42:07.693: INFO: Pod pod-secrets-7187b38e-76db-452f-9219-5d15d98db8f8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:42:07.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8372" for this suite. Feb 13 13:42:13.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:42:13.915: INFO: namespace secrets-8372 deletion completed in 6.213195916s • [SLOW TEST:14.567 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:42:13.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 13 13:42:14.034: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 13 13:42:14.051: INFO: Waiting for terminating namespaces to be deleted... 
Feb 13 13:42:14.054: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 13 13:42:14.070: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 13 13:42:14.070: INFO: Container weave ready: true, restart count 0
Feb 13 13:42:14.070: INFO: Container weave-npc ready: true, restart count 0
Feb 13 13:42:14.070: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.070: INFO: Container kube-bench ready: false, restart count 0
Feb 13 13:42:14.070: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.070: INFO: Container kube-proxy ready: true, restart count 0
Feb 13 13:42:14.070: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 13 13:42:14.088: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container etcd ready: true, restart count 0
Feb 13 13:42:14.088: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container weave ready: true, restart count 0
Feb 13 13:42:14.088: INFO: Container weave-npc ready: true, restart count 0
Feb 13 13:42:14.088: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container coredns ready: true, restart count 0
Feb 13 13:42:14.088: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container kube-controller-manager ready: true, restart count 21
Feb 13 13:42:14.088: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container kube-proxy ready: true, restart count 0
Feb 13 13:42:14.088: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container kube-apiserver ready: true, restart count 0
Feb 13 13:42:14.088: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container kube-scheduler ready: true, restart count 13
Feb 13 13:42:14.088: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 13 13:42:14.088: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 13 13:42:14.805: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 13 13:42:14.805: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809.15f2fa08d3504dcb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7622/filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809.15f2fa09f959d290], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809.15f2fa0acf7c003e], Reason = [Created], Message = [Created container filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809.15f2fa0af43640c3], Reason = [Started], Message = [Started container filler-pod-77e8cad2-1fd5-4c4a-80c9-6626eac2b809]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021.15f2fa08d21684b1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7622/filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021 to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021.15f2fa09eef64bc3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021.15f2fa0ab7ba0ae9], Reason = [Created], Message = [Created container filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021.15f2fa0ae898eab5], Reason = [Started], Message = [Started container filler-pod-9956a831-87f9-4dfe-b9c4-62213c279021]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f2fa0ba079d0b6], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:42:28.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7622" for this suite.
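
The FailedScheduling event above is driven purely by CPU requests: the filler pods reserve most of each node's allocatable CPU, so one more request cannot fit. A sketch of the kind of pod that triggers it (the request value is illustrative; only the pause image and the "0/2 nodes are available: 2 Insufficient cpu." message appear in this run):

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"    # scheduling fails if no node has 1 full CPU left unreserved

Note the scheduler compares requests against node allocatable capacity, not actual usage; CPU that is reserved but idle still blocks placement.
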
Feb 13 13:42:34.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:42:34.423: INFO: namespace sched-pred-7622 deletion completed in 6.195518806s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:20.507 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:42:34.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 13 13:42:35.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd" in namespace "projected-9954" to be "success or failure" Feb 13 13:42:35.985: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.465806ms Feb 13 13:42:38.252: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296338859s Feb 13 13:42:40.270: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314774193s Feb 13 13:42:42.276: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321251745s Feb 13 13:42:44.290: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334705997s Feb 13 13:42:46.302: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.346690745s STEP: Saw pod success Feb 13 13:42:46.302: INFO: Pod "downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd" satisfied condition "success or failure" Feb 13 13:42:46.307: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd container client-container: STEP: delete the pod Feb 13 13:42:46.375: INFO: Waiting for pod downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd to disappear Feb 13 13:42:46.489: INFO: Pod downwardapi-volume-feb92394-9a70-4ad9-8cc0-60903d95fccd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:42:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9954" for this suite. Feb 13 13:42:52.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:42:52.668: INFO: namespace projected-9954 deletion completed in 6.159119118s • [SLOW TEST:18.244 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:42:52.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-aa629eef-729b-48a3-8959-fae7f233d0c9 STEP: Creating secret with name s-test-opt-upd-bac93c8b-d9bf-4c3a-9957-fe06d73f4dc1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-aa629eef-729b-48a3-8959-fae7f233d0c9 STEP: Updating secret s-test-opt-upd-bac93c8b-d9bf-4c3a-9957-fe06d73f4dc1 STEP: Creating secret with name s-test-opt-create-f297f2da-6009-4d79-b643-145d0819d665 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:43:07.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3215" for this suite. 
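
The three secrets above (one deleted, one updated, one created late) can all feed a single projected volume because each source is marked optional, and the kubelet re-syncs the volume contents as the secrets change. A minimal sketch with hypothetical secret, volume, and pod names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets   # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: secret-a
          optional: true   # pod stays Running even if secret-a is deleted
      - secret:
          name: secret-b
          optional: true   # files appear once secret-b is created
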
Feb 13 13:43:29.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:43:29.361: INFO: namespace projected-3215 deletion completed in 22.186076757s • [SLOW TEST:36.691 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:43:29.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:43:29.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9644" for this suite. Feb 13 13:43:35.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:43:35.649: INFO: namespace services-9644 deletion completed in 6.166577598s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.288 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:43:35.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 13 13:43:45.792: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod 
gracefully STEP: verifying the kubelet observed the termination notice Feb 13 13:44:00.976: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:44:00.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3796" for this suite. Feb 13 13:44:07.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:44:07.122: INFO: namespace pods-3796 deletion completed in 6.128252824s • [SLOW TEST:31.472 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:44:07.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:44:17.382: INFO: Waiting up to 5m0s for pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187" in namespace "pods-8808" to be "success or failure" Feb 13 13:44:17.398: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Pending", Reason="", readiness=false. Elapsed: 15.120165ms Feb 13 13:44:19.409: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025873265s Feb 13 13:44:21.419: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036703641s Feb 13 13:44:23.433: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049815765s Feb 13 13:44:25.443: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060417267s Feb 13 13:44:27.460: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.077132595s STEP: Saw pod success Feb 13 13:44:27.460: INFO: Pod "client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187" satisfied condition "success or failure" Feb 13 13:44:27.470: INFO: Trying to get logs from node iruya-node pod client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187 container env3cont: STEP: delete the pod Feb 13 13:44:27.546: INFO: Waiting for pod client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187 to disappear Feb 13 13:44:27.554: INFO: Pod client-envvars-3fecd48d-8043-4dea-badf-57d91ffbf187 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:44:27.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8808" for this suite. Feb 13 13:45:19.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:45:19.810: INFO: namespace pods-8808 deletion completed in 52.188594393s • [SLOW TEST:72.689 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:45:19.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:45:19.948: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:45:28.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3091" for this suite. 
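
For the service-environment-variables spec above: a container sees {NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT variables for every service that existed in its namespace when the pod started, which is why the test creates the service first and the client pod afterwards. A sketch with a hypothetical service name and selector:

apiVersion: v1
kind: Service
metadata:
  name: fooservice           # yields FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT in later pods
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080

A pod created after this service can check the injection with something like: env | grep FOOSERVICE.
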
Feb 13 13:46:20.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:46:20.588: INFO: namespace pods-3091 deletion completed in 52.173520987s • [SLOW TEST:60.777 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:46:20.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 13 13:46:21.549: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 13 13:46:26.566: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 13 13:46:30.595: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 13 13:46:30.716: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5452,SelfLink:/apis/apps/v1/namespaces/deployment-5452/deployments/test-cleanup-deployment,UID:ac691625-fe34-48f0-a8fe-c084a4431177,ResourceVersion:24201773,Generation:1,CreationTimestamp:2020-02-13 13:46:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 13 13:46:30.762: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5452,SelfLink:/apis/apps/v1/namespaces/deployment-5452/replicasets/test-cleanup-deployment-55bbcbc84c,UID:65d70ddc-1c48-43e8-a42b-eb570c694be6,ResourceVersion:24201781,Generation:1,CreationTimestamp:2020-02-13 13:46:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ac691625-fe34-48f0-a8fe-c084a4431177 0xc0024cd697 0xc0024cd698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 13:46:30.762: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 13 13:46:30.763: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5452,SelfLink:/apis/apps/v1/namespaces/deployment-5452/replicasets/test-cleanup-controller,UID:d12b73cc-d05b-467f-897f-d40409329cde,ResourceVersion:24201774,Generation:1,CreationTimestamp:2020-02-13 13:46:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ac691625-fe34-48f0-a8fe-c084a4431177 0xc0024cd5c7 0xc0024cd5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 13 13:46:30.804: INFO: Pod "test-cleanup-controller-tkmwq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tkmwq,GenerateName:test-cleanup-controller-,Namespace:deployment-5452,SelfLink:/api/v1/namespaces/deployment-5452/pods/test-cleanup-controller-tkmwq,UID:f236c95f-d798-45cd-8834-63aeea1a36e1,ResourceVersion:24201770,Generation:0,CreationTimestamp:2020-02-13 13:46:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller d12b73cc-d05b-467f-897f-d40409329cde 0xc0024cdf77 0xc0024cdf78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zmc8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zmc8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zmc8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024cdff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b30010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:46:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:46:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:46:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:46:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-13 13:46:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 13:46:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fe7c5d9d2e50b8f09c5b27ea9851fd3d63453fef502ad51ec1cd6329ab05b557}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 13:46:30.805: INFO: Pod "test-cleanup-deployment-55bbcbc84c-czw7z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-czw7z,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5452,SelfLink:/api/v1/namespaces/deployment-5452/pods/test-cleanup-deployment-55bbcbc84c-czw7z,UID:114b39eb-56a2-45a4-98d2-61ccec24a68a,ResourceVersion:24201779,Generation:0,CreationTimestamp:2020-02-13 13:46:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 65d70ddc-1c48-43e8-a42b-eb570c694be6 0xc002b300f7 0xc002b300f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zmc8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zmc8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zmc8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b30170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b301a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:46:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:46:30.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5452" for this suite. 
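
The cleanup behavior verified above hinges on revisionHistoryLimit: with it set to 0 (as the dumped spec shows, RevisionHistoryLimit:*0), old ReplicaSets are garbage-collected as soon as they are scaled down. A trimmed-down sketch of the deployment, reconstructed from the object dump above (fields not visible there are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no old ReplicaSets around after rollover
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
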
Feb 13 13:46:39.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:46:39.128: INFO: namespace deployment-5452 deletion completed in 8.242662698s • [SLOW TEST:18.539 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:46:39.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 13 13:46:39.335: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 13 13:46:39.351: INFO: Waiting for terminating namespaces to be deleted... Feb 13 13:46:39.358: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 13 13:46:39.390: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.390: INFO: Container kube-bench ready: false, restart count 0 Feb 13 13:46:39.390: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.390: INFO: Container kube-proxy ready: true, restart count 0 Feb 13 13:46:39.390: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 13 13:46:39.390: INFO: Container weave ready: true, restart count 0 Feb 13 13:46:39.390: INFO: Container weave-npc ready: true, restart count 0 Feb 13 13:46:39.390: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 13 13:46:39.466: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container kube-controller-manager ready: true, restart count 21 Feb 13 13:46:39.466: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container kube-proxy ready: true, restart count 0 Feb 13 13:46:39.466: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container kube-apiserver ready: true, restart count 0 Feb 13 13:46:39.466: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container kube-scheduler ready: true, restart count 13 Feb 13 13:46:39.466: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container coredns 
ready: true, restart count 0 Feb 13 13:46:39.466: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container etcd ready: true, restart count 0 Feb 13 13:46:39.466: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 13 13:46:39.466: INFO: Container weave ready: true, restart count 0 Feb 13 13:46:39.466: INFO: Container weave-npc ready: true, restart count 0 Feb 13 13:46:39.466: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 13 13:46:39.466: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-19291174-f6e1-4140-99d1-65e7e1737e5d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-19291174-f6e1-4140-99d1-65e7e1737e5d off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-19291174-f6e1-4140-99d1-65e7e1737e5d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:46:59.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7405" for this suite. Feb 13 13:47:19.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:47:19.977: INFO: namespace sched-pred-7405 deletion completed in 20.165746309s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:40.848 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:47:19.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Feb 13 13:47:20.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 13 
13:47:20.297: INFO: stderr: "" Feb 13 13:47:20.297: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:47:20.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1989" for this suite. Feb 13 13:47:26.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:47:26.504: INFO: namespace kubectl-1989 deletion completed in 6.19508625s • [SLOW TEST:6.526 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:47:26.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 13 13:47:26.651: INFO: Waiting up to 5m0s for pod "pod-4cca625d-743b-4beb-8103-63e36a533073" in namespace "emptydir-4445" to be "success or failure" Feb 13 13:47:26.673: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Pending", Reason="", readiness=false. Elapsed: 22.588059ms Feb 13 13:47:28.685: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033898439s Feb 13 13:47:30.783: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132090652s Feb 13 13:47:32.796: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14537892s Feb 13 13:47:34.825: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.174198262s Feb 13 13:47:36.835: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184735362s STEP: Saw pod success Feb 13 13:47:36.836: INFO: Pod "pod-4cca625d-743b-4beb-8103-63e36a533073" satisfied condition "success or failure" Feb 13 13:47:36.841: INFO: Trying to get logs from node iruya-node pod pod-4cca625d-743b-4beb-8103-63e36a533073 container test-container: STEP: delete the pod Feb 13 13:47:37.139: INFO: Waiting for pod pod-4cca625d-743b-4beb-8103-63e36a533073 to disappear Feb 13 13:47:37.154: INFO: Pod pod-4cca625d-743b-4beb-8103-63e36a533073 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:47:37.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4445" for this suite. Feb 13 13:47:43.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:47:43.445: INFO: namespace emptydir-4445 deletion completed in 6.257506785s • [SLOW TEST:16.938 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:47:43.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Feb 13 13:47:51.593: INFO: Pod pod-hostip-3172c0a2-5ca2-4d8c-a464-a8775e80fb59 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:47:51.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3044" for this suite. 
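
The pods test above creates pod-hostip-3172c0a2-5ca2-4d8c-a464-a8775e80fb59 and asserts that the address of the node it was scheduled to shows up in the pod's status; the reported value 10.96.3.65 matches the HostIP recorded for iruya-node earlier in this log. A one-line equivalent of the assertion, runnable while the pod still exists (the namespace is torn down just below):

# Print the host IP from the pod status, the field the test checks.
kubectl --kubeconfig=/root/.kube/config -n pods-3044 \
  get pod pod-hostip-3172c0a2-5ca2-4d8c-a464-a8775e80fb59 \
  -o jsonpath='{.status.hostIP}'
# Expected for this run, per the log above: 10.96.3.65
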
Feb 13 13:48:13.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:48:13.952: INFO: namespace pods-3044 deletion completed in 22.352236262s • [SLOW TEST:30.506 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:48:13.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 13 13:48:14.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5947,SelfLink:/api/v1/namespaces/watch-5947/configmaps/e2e-watch-test-watch-closed,UID:608e8176-030c-4230-899b-ebc6182417a1,ResourceVersion:24202046,Generation:0,CreationTimestamp:2020-02-13 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 13 13:48:14.112: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5947,SelfLink:/api/v1/namespaces/watch-5947/configmaps/e2e-watch-test-watch-closed,UID:608e8176-030c-4230-899b-ebc6182417a1,ResourceVersion:24202047,Generation:0,CreationTimestamp:2020-02-13 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 13 13:48:14.156: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5947,SelfLink:/api/v1/namespaces/watch-5947/configmaps/e2e-watch-test-watch-closed,UID:608e8176-030c-4230-899b-ebc6182417a1,ResourceVersion:24202048,Generation:0,CreationTimestamp:2020-02-13 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 13 13:48:14.156: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5947,SelfLink:/api/v1/namespaces/watch-5947/configmaps/e2e-watch-test-watch-closed,UID:608e8176-030c-4230-899b-ebc6182417a1,ResourceVersion:24202050,Generation:0,CreationTimestamp:2020-02-13 13:48:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:48:14.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5947" for this suite. Feb 13 13:48:20.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:48:20.326: INFO: namespace watch-5947 deletion completed in 6.160239818s • [SLOW TEST:6.372 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:48:20.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 13 13:48:28.588: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] 
[k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:48:28.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3499" for this suite. Feb 13 13:48:34.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:48:34.786: INFO: namespace container-runtime-3499 deletion completed in 6.160751526s • [SLOW TEST:14.459 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:48:34.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-c5qc STEP: Creating a pod to test atomic-volume-subpath Feb 13 13:48:35.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c5qc" in namespace "subpath-3825" to be "success or failure" Feb 13 13:48:35.133: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.878392ms Feb 13 13:48:37.140: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02165824s Feb 13 13:48:39.168: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049633413s Feb 13 13:48:41.176: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05734332s Feb 13 13:48:43.188: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 8.069883249s Feb 13 13:48:45.200: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 10.081958205s Feb 13 13:48:47.208: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 12.089088895s Feb 13 13:48:49.215: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 14.096229655s Feb 13 13:48:51.221: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.102354735s Feb 13 13:48:53.235: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 18.11669613s Feb 13 13:48:55.248: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 20.129181225s Feb 13 13:48:57.258: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 22.139228857s Feb 13 13:48:59.268: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 24.149559172s Feb 13 13:49:01.278: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 26.159378514s Feb 13 13:49:03.287: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Running", Reason="", readiness=true. Elapsed: 28.168775034s Feb 13 13:49:05.296: INFO: Pod "pod-subpath-test-configmap-c5qc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.177286511s STEP: Saw pod success Feb 13 13:49:05.296: INFO: Pod "pod-subpath-test-configmap-c5qc" satisfied condition "success or failure" Feb 13 13:49:05.300: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-c5qc container test-container-subpath-configmap-c5qc: STEP: delete the pod Feb 13 13:49:05.361: INFO: Waiting for pod pod-subpath-test-configmap-c5qc to disappear Feb 13 13:49:05.496: INFO: Pod pod-subpath-test-configmap-c5qc no longer exists STEP: Deleting pod pod-subpath-test-configmap-c5qc Feb 13 13:49:05.496: INFO: Deleting pod "pod-subpath-test-configmap-c5qc" in namespace "subpath-3825" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 13 13:49:05.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3825" for this suite. 
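
The subpath test above mounts a ConfigMap into pod-subpath-test-configmap-c5qc through a volumeMount subPath and watches the container read the resulting file across roughly thirty seconds of Running state before the pod succeeds. A minimal sketch of that volume layout; subpath-demo, demo-config, and the key contents are hypothetical illustrations, not the objects the test actually creates:

# Pod that mounts a single ConfigMap key as one file via subPath.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config            # hypothetical ConfigMap
data:
  data: "hello from a subPath mount"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo           # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/data
      subPath: data            # expose only the "data" key at this path
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF

Note that a subPath mount is point-in-time: unlike a whole-ConfigMap mount, the file is not refreshed when the ConfigMap is later updated.
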
Feb 13 13:49:11.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 13:49:11.694: INFO: namespace subpath-3825 deletion completed in 6.178706515s • [SLOW TEST:36.907 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 13 13:49:11.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2012 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2012 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2012 Feb 13 13:49:12.042: INFO: Found 0 stateful pods, waiting for 1 Feb 13 13:49:22.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 13 13:49:22.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 13:49:25.193: INFO: stderr: "I0213 13:49:24.742341 1121 log.go:172] (0xc00011ae70) (0xc0004e0960) Create stream\nI0213 13:49:24.742598 1121 log.go:172] (0xc00011ae70) (0xc0004e0960) Stream added, broadcasting: 1\nI0213 13:49:24.756495 1121 log.go:172] (0xc00011ae70) Reply frame received for 1\nI0213 13:49:24.756565 1121 log.go:172] (0xc00011ae70) (0xc000918000) Create stream\nI0213 13:49:24.756579 1121 log.go:172] (0xc00011ae70) (0xc000918000) Stream added, broadcasting: 3\nI0213 13:49:24.759178 1121 log.go:172] (0xc00011ae70) Reply frame received for 3\nI0213 13:49:24.759237 1121 log.go:172] (0xc00011ae70) (0xc00099c000) Create stream\nI0213 13:49:24.759262 1121 log.go:172] (0xc00011ae70) (0xc00099c000) Stream added, broadcasting: 5\nI0213 13:49:24.761679 1121 log.go:172] (0xc00011ae70) Reply frame received for 5\nI0213 13:49:24.951307 1121 log.go:172] (0xc00011ae70) Data frame received for 5\nI0213 13:49:24.951427 1121 log.go:172] (0xc00099c000) (5) Data frame handling\nI0213 13:49:24.951466 1121 log.go:172] (0xc00099c000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0213 13:49:25.010058 1121 log.go:172] (0xc00011ae70) Data frame received for 3\nI0213 13:49:25.010178 1121 log.go:172] (0xc000918000) (3) Data frame handling\nI0213 13:49:25.010240 1121 log.go:172] (0xc000918000) (3) Data frame sent\nI0213 13:49:25.162319 1121 log.go:172] (0xc00011ae70) (0xc000918000) Stream removed, broadcasting: 3\nI0213 13:49:25.162695 1121 log.go:172] (0xc00011ae70) Data frame received for 1\nI0213 13:49:25.162736 1121 log.go:172] (0xc0004e0960) (1) Data frame handling\nI0213 13:49:25.162771 1121 log.go:172] (0xc0004e0960) (1) Data frame sent\nI0213 13:49:25.162790 1121 log.go:172] (0xc00011ae70) (0xc00099c000) Stream removed, broadcasting: 5\nI0213 13:49:25.162960 1121 log.go:172] (0xc00011ae70) (0xc0004e0960) Stream removed, broadcasting: 1\nI0213 13:49:25.162977 1121 log.go:172] (0xc00011ae70) Go away received\nI0213 13:49:25.164692 1121 log.go:172] (0xc00011ae70) (0xc0004e0960) Stream removed, broadcasting: 1\nI0213 13:49:25.164728 1121 log.go:172] (0xc00011ae70) (0xc000918000) Stream removed, broadcasting: 3\nI0213 13:49:25.164741 1121 log.go:172] (0xc00011ae70) (0xc00099c000) Stream removed, broadcasting: 5\n" Feb 13 13:49:25.194: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 13:49:25.194: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 13:49:25.260: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 13 13:49:35.276: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 13:49:35.277: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 13:49:35.302: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 13:49:35.302: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:12 +0000 UTC }] Feb 13 13:49:35.302: INFO: Feb 13 13:49:35.302: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 13 13:49:36.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986773386s Feb 13 13:49:37.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972026466s Feb 13 13:49:38.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.771814275s Feb 13 13:49:39.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.639451896s Feb 13 13:49:40.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.610098606s Feb 13 13:49:42.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.567909776s Feb 13 13:49:45.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.249045044s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2012 Feb 13 13:49:46.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 13:49:47.448: INFO: stderr: "I0213 13:49:47.109814 1157 log.go:172] (0xc00012b080) (0xc0004cc8c0) Create stream\nI0213 13:49:47.110056 1157 
log.go:172] (0xc00012b080) (0xc0004cc8c0) Stream added, broadcasting: 1\nI0213 13:49:47.119072 1157 log.go:172] (0xc00012b080) Reply frame received for 1\nI0213 13:49:47.119177 1157 log.go:172] (0xc00012b080) (0xc0007ca000) Create stream\nI0213 13:49:47.119211 1157 log.go:172] (0xc00012b080) (0xc0007ca000) Stream added, broadcasting: 3\nI0213 13:49:47.121223 1157 log.go:172] (0xc00012b080) Reply frame received for 3\nI0213 13:49:47.121247 1157 log.go:172] (0xc00012b080) (0xc0007ca0a0) Create stream\nI0213 13:49:47.121273 1157 log.go:172] (0xc00012b080) (0xc0007ca0a0) Stream added, broadcasting: 5\nI0213 13:49:47.122708 1157 log.go:172] (0xc00012b080) Reply frame received for 5\nI0213 13:49:47.276998 1157 log.go:172] (0xc00012b080) Data frame received for 5\nI0213 13:49:47.277189 1157 log.go:172] (0xc0007ca0a0) (5) Data frame handling\nI0213 13:49:47.277242 1157 log.go:172] (0xc0007ca0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 13:49:47.277304 1157 log.go:172] (0xc00012b080) Data frame received for 3\nI0213 13:49:47.277324 1157 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0213 13:49:47.277376 1157 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0213 13:49:47.427582 1157 log.go:172] (0xc00012b080) Data frame received for 1\nI0213 13:49:47.427956 1157 log.go:172] (0xc00012b080) (0xc0007ca0a0) Stream removed, broadcasting: 5\nI0213 13:49:47.432485 1157 log.go:172] (0xc0004cc8c0) (1) Data frame handling\nI0213 13:49:47.432862 1157 log.go:172] (0xc0004cc8c0) (1) Data frame sent\nI0213 13:49:47.433246 1157 log.go:172] (0xc00012b080) (0xc0007ca000) Stream removed, broadcasting: 3\nI0213 13:49:47.433440 1157 log.go:172] (0xc00012b080) (0xc0004cc8c0) Stream removed, broadcasting: 1\nI0213 13:49:47.433604 1157 log.go:172] (0xc00012b080) Go away received\nI0213 13:49:47.439649 1157 log.go:172] (0xc00012b080) (0xc0004cc8c0) Stream removed, broadcasting: 1\nI0213 13:49:47.439717 1157 log.go:172] (0xc00012b080) (0xc0007ca000) Stream removed, broadcasting: 3\nI0213 13:49:47.439731 1157 log.go:172] (0xc00012b080) (0xc0007ca0a0) Stream removed, broadcasting: 5\n" Feb 13 13:49:47.449: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 13:49:47.449: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 13:49:47.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 13:49:48.120: INFO: stderr: "I0213 13:49:47.636000 1177 log.go:172] (0xc000a22000) (0xc000ad0140) Create stream\nI0213 13:49:47.636289 1177 log.go:172] (0xc000a22000) (0xc000ad0140) Stream added, broadcasting: 1\nI0213 13:49:47.641544 1177 log.go:172] (0xc000a22000) Reply frame received for 1\nI0213 13:49:47.641586 1177 log.go:172] (0xc000a22000) (0xc0006721e0) Create stream\nI0213 13:49:47.641630 1177 log.go:172] (0xc000a22000) (0xc0006721e0) Stream added, broadcasting: 3\nI0213 13:49:47.643475 1177 log.go:172] (0xc000a22000) Reply frame received for 3\nI0213 13:49:47.643521 1177 log.go:172] (0xc000a22000) (0xc000226000) Create stream\nI0213 13:49:47.643534 1177 log.go:172] (0xc000a22000) (0xc000226000) Stream added, broadcasting: 5\nI0213 13:49:47.644975 1177 log.go:172] (0xc000a22000) Reply frame received for 5\nI0213 13:49:47.788242 1177 log.go:172] (0xc000a22000) Data frame received for 5\nI0213 13:49:47.788311 1177 log.go:172] (0xc000226000) (5) Data 
frame handling\nI0213 13:49:47.788324 1177 log.go:172] (0xc000226000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 13:49:47.952890 1177 log.go:172] (0xc000a22000) Data frame received for 5\nI0213 13:49:47.953153 1177 log.go:172] (0xc000226000) (5) Data frame handling\nI0213 13:49:47.953226 1177 log.go:172] (0xc000226000) (5) Data frame sent\nI0213 13:49:47.953256 1177 log.go:172] (0xc000a22000) Data frame received for 5\nI0213 13:49:47.953285 1177 log.go:172] (0xc000226000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0213 13:49:47.953348 1177 log.go:172] (0xc000226000) (5) Data frame sent\nI0213 13:49:47.953364 1177 log.go:172] (0xc000a22000) Data frame received for 3\nI0213 13:49:47.953375 1177 log.go:172] (0xc0006721e0) (3) Data frame handling\nI0213 13:49:47.953392 1177 log.go:172] (0xc0006721e0) (3) Data frame sent\nI0213 13:49:48.108275 1177 log.go:172] (0xc000a22000) (0xc000226000) Stream removed, broadcasting: 5\nI0213 13:49:48.108422 1177 log.go:172] (0xc000a22000) Data frame received for 1\nI0213 13:49:48.108453 1177 log.go:172] (0xc000a22000) (0xc0006721e0) Stream removed, broadcasting: 3\nI0213 13:49:48.108636 1177 log.go:172] (0xc000ad0140) (1) Data frame handling\nI0213 13:49:48.108838 1177 log.go:172] (0xc000ad0140) (1) Data frame sent\nI0213 13:49:48.108872 1177 log.go:172] (0xc000a22000) (0xc000ad0140) Stream removed, broadcasting: 1\nI0213 13:49:48.108910 1177 log.go:172] (0xc000a22000) Go away received\nI0213 13:49:48.109732 1177 log.go:172] (0xc000a22000) (0xc000ad0140) Stream removed, broadcasting: 1\nI0213 13:49:48.109748 1177 log.go:172] (0xc000a22000) (0xc0006721e0) Stream removed, broadcasting: 3\nI0213 13:49:48.109755 1177 log.go:172] (0xc000a22000) (0xc000226000) Stream removed, broadcasting: 5\n" Feb 13 13:49:48.120: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 13:49:48.120: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 13:49:48.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 13:49:48.798: INFO: stderr: "I0213 13:49:48.370600 1197 log.go:172] (0xc0009a4370) (0xc0009125a0) Create stream\nI0213 13:49:48.370861 1197 log.go:172] (0xc0009a4370) (0xc0009125a0) Stream added, broadcasting: 1\nI0213 13:49:48.378927 1197 log.go:172] (0xc0009a4370) Reply frame received for 1\nI0213 13:49:48.378980 1197 log.go:172] (0xc0009a4370) (0xc0008f0000) Create stream\nI0213 13:49:48.378991 1197 log.go:172] (0xc0009a4370) (0xc0008f0000) Stream added, broadcasting: 3\nI0213 13:49:48.380159 1197 log.go:172] (0xc0009a4370) Reply frame received for 3\nI0213 13:49:48.380177 1197 log.go:172] (0xc0009a4370) (0xc0008f00a0) Create stream\nI0213 13:49:48.380186 1197 log.go:172] (0xc0009a4370) (0xc0008f00a0) Stream added, broadcasting: 5\nI0213 13:49:48.381409 1197 log.go:172] (0xc0009a4370) Reply frame received for 5\nI0213 13:49:48.604168 1197 log.go:172] (0xc0009a4370) Data frame received for 5\nI0213 13:49:48.604429 1197 log.go:172] (0xc0008f00a0) (5) Data frame handling\nI0213 13:49:48.604523 1197 log.go:172] (0xc0008f00a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0213 13:49:48.607600 1197 log.go:172] (0xc0009a4370) Data frame received for 5\nI0213 
13:49:48.607632 1197 log.go:172] (0xc0008f00a0) (5) Data frame handling\nI0213 13:49:48.607655 1197 log.go:172] (0xc0009a4370) Data frame received for 3\nI0213 13:49:48.607696 1197 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0213 13:49:48.607716 1197 log.go:172] (0xc0008f00a0) (5) Data frame sent\nI0213 13:49:48.607735 1197 log.go:172] (0xc0008f0000) (3) Data frame sent\n+ true\nI0213 13:49:48.788990 1197 log.go:172] (0xc0009a4370) (0xc0008f0000) Stream removed, broadcasting: 3\nI0213 13:49:48.789328 1197 log.go:172] (0xc0009a4370) Data frame received for 1\nI0213 13:49:48.789582 1197 log.go:172] (0xc0009a4370) (0xc0008f00a0) Stream removed, broadcasting: 5\nI0213 13:49:48.789759 1197 log.go:172] (0xc0009125a0) (1) Data frame handling\nI0213 13:49:48.789791 1197 log.go:172] (0xc0009125a0) (1) Data frame sent\nI0213 13:49:48.789817 1197 log.go:172] (0xc0009a4370) (0xc0009125a0) Stream removed, broadcasting: 1\nI0213 13:49:48.789855 1197 log.go:172] (0xc0009a4370) Go away received\nI0213 13:49:48.791248 1197 log.go:172] (0xc0009a4370) (0xc0009125a0) Stream removed, broadcasting: 1\nI0213 13:49:48.791360 1197 log.go:172] (0xc0009a4370) (0xc0008f0000) Stream removed, broadcasting: 3\nI0213 13:49:48.791374 1197 log.go:172] (0xc0009a4370) (0xc0008f00a0) Stream removed, broadcasting: 5\n" Feb 13 13:49:48.798: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 13:49:48.798: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 13:49:48.805: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:49:48.806: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:49:48.806: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Feb 13 13:49:58.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:49:58.814: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 13:49:58.814: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 13 13:49:58.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 13:49:59.288: INFO: stderr: "I0213 13:49:59.027073 1217 log.go:172] (0xc0009cc370) (0xc0009c66e0) Create stream\nI0213 13:49:59.027276 1217 log.go:172] (0xc0009cc370) (0xc0009c66e0) Stream added, broadcasting: 1\nI0213 13:49:59.033321 1217 log.go:172] (0xc0009cc370) Reply frame received for 1\nI0213 13:49:59.033355 1217 log.go:172] (0xc0009cc370) (0xc000798280) Create stream\nI0213 13:49:59.033362 1217 log.go:172] (0xc0009cc370) (0xc000798280) Stream added, broadcasting: 3\nI0213 13:49:59.034736 1217 log.go:172] (0xc0009cc370) Reply frame received for 3\nI0213 13:49:59.034759 1217 log.go:172] (0xc0009cc370) (0xc0009c6780) Create stream\nI0213 13:49:59.034765 1217 log.go:172] (0xc0009cc370) (0xc0009c6780) Stream added, broadcasting: 5\nI0213 13:49:59.037897 1217 log.go:172] (0xc0009cc370) Reply frame received for 5\nI0213 13:49:59.141845 1217 log.go:172] (0xc0009cc370) Data frame received for 5\nI0213 13:49:59.141921 1217 log.go:172] (0xc0009c6780) (5) Data frame handling\nI0213 13:49:59.141940 1217 log.go:172] (0xc0009c6780) (5) Data frame sent\n+ 
mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 13:49:59.141961 1217 log.go:172] (0xc0009cc370) Data frame received for 3\nI0213 13:49:59.141973 1217 log.go:172] (0xc000798280) (3) Data frame handling\nI0213 13:49:59.141987 1217 log.go:172] (0xc000798280) (3) Data frame sent\nI0213 13:49:59.278001 1217 log.go:172] (0xc0009cc370) (0xc000798280) Stream removed, broadcasting: 3\nI0213 13:49:59.278272 1217 log.go:172] (0xc0009cc370) Data frame received for 1\nI0213 13:49:59.278360 1217 log.go:172] (0xc0009cc370) (0xc0009c6780) Stream removed, broadcasting: 5\nI0213 13:49:59.278450 1217 log.go:172] (0xc0009c66e0) (1) Data frame handling\nI0213 13:49:59.278482 1217 log.go:172] (0xc0009c66e0) (1) Data frame sent\nI0213 13:49:59.278512 1217 log.go:172] (0xc0009cc370) (0xc0009c66e0) Stream removed, broadcasting: 1\nI0213 13:49:59.278526 1217 log.go:172] (0xc0009cc370) Go away received\nI0213 13:49:59.279461 1217 log.go:172] (0xc0009cc370) (0xc0009c66e0) Stream removed, broadcasting: 1\nI0213 13:49:59.279477 1217 log.go:172] (0xc0009cc370) (0xc000798280) Stream removed, broadcasting: 3\nI0213 13:49:59.279483 1217 log.go:172] (0xc0009cc370) (0xc0009c6780) Stream removed, broadcasting: 5\n" Feb 13 13:49:59.288: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 13:49:59.288: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 13:49:59.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 13:49:59.653: INFO: stderr: "I0213 13:49:59.428514 1238 log.go:172] (0xc000116fd0) (0xc00063ebe0) Create stream\nI0213 13:49:59.428845 1238 log.go:172] (0xc000116fd0) (0xc00063ebe0) Stream added, broadcasting: 1\nI0213 13:49:59.435091 1238 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0213 13:49:59.435256 1238 log.go:172] (0xc000116fd0) (0xc00063e320) Create stream\nI0213 13:49:59.435276 1238 log.go:172] (0xc000116fd0) (0xc00063e320) Stream added, broadcasting: 3\nI0213 13:49:59.437269 1238 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0213 13:49:59.437306 1238 log.go:172] (0xc000116fd0) (0xc00021e000) Create stream\nI0213 13:49:59.437320 1238 log.go:172] (0xc000116fd0) (0xc00021e000) Stream added, broadcasting: 5\nI0213 13:49:59.438226 1238 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0213 13:49:59.526664 1238 log.go:172] (0xc000116fd0) Data frame received for 5\nI0213 13:49:59.526743 1238 log.go:172] (0xc00021e000) (5) Data frame handling\nI0213 13:49:59.526771 1238 log.go:172] (0xc00021e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 13:49:59.553830 1238 log.go:172] (0xc000116fd0) Data frame received for 3\nI0213 13:49:59.553882 1238 log.go:172] (0xc00063e320) (3) Data frame handling\nI0213 13:49:59.553900 1238 log.go:172] (0xc00063e320) (3) Data frame sent\nI0213 13:49:59.638939 1238 log.go:172] (0xc000116fd0) (0xc00063e320) Stream removed, broadcasting: 3\nI0213 13:49:59.639391 1238 log.go:172] (0xc000116fd0) Data frame received for 1\nI0213 13:49:59.639427 1238 log.go:172] (0xc00063ebe0) (1) Data frame handling\nI0213 13:49:59.639459 1238 log.go:172] (0xc00063ebe0) (1) Data frame sent\nI0213 13:49:59.639630 1238 log.go:172] (0xc000116fd0) (0xc00063ebe0) Stream removed, broadcasting: 1\nI0213 13:49:59.640902 1238 log.go:172] (0xc000116fd0) (0xc00021e000) Stream removed, broadcasting: 5\nI0213 
13:49:59.640970 1238 log.go:172] (0xc000116fd0) (0xc00063ebe0) Stream removed, broadcasting: 1\nI0213 13:49:59.640991 1238 log.go:172] (0xc000116fd0) (0xc00063e320) Stream removed, broadcasting: 3\nI0213 13:49:59.641008 1238 log.go:172] (0xc000116fd0) (0xc00021e000) Stream removed, broadcasting: 5\n" Feb 13 13:49:59.654: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 13:49:59.654: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 13:49:59.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 13:50:00.209: INFO: stderr: "I0213 13:49:59.873328 1259 log.go:172] (0xc000a54420) (0xc0002c2820) Create stream\nI0213 13:49:59.873442 1259 log.go:172] (0xc000a54420) (0xc0002c2820) Stream added, broadcasting: 1\nI0213 13:49:59.890261 1259 log.go:172] (0xc000a54420) Reply frame received for 1\nI0213 13:49:59.890382 1259 log.go:172] (0xc000a54420) (0xc00020a3c0) Create stream\nI0213 13:49:59.890402 1259 log.go:172] (0xc000a54420) (0xc00020a3c0) Stream added, broadcasting: 3\nI0213 13:49:59.892457 1259 log.go:172] (0xc000a54420) Reply frame received for 3\nI0213 13:49:59.892482 1259 log.go:172] (0xc000a54420) (0xc00020a460) Create stream\nI0213 13:49:59.892492 1259 log.go:172] (0xc000a54420) (0xc00020a460) Stream added, broadcasting: 5\nI0213 13:49:59.893568 1259 log.go:172] (0xc000a54420) Reply frame received for 5\nI0213 13:50:00.007691 1259 log.go:172] (0xc000a54420) Data frame received for 5\nI0213 13:50:00.007821 1259 log.go:172] (0xc00020a460) (5) Data frame handling\nI0213 13:50:00.007845 1259 log.go:172] (0xc00020a460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 13:50:00.055206 1259 log.go:172] (0xc000a54420) Data frame received for 3\nI0213 13:50:00.055373 1259 log.go:172] (0xc00020a3c0) (3) Data frame handling\nI0213 13:50:00.055399 1259 log.go:172] (0xc00020a3c0) (3) Data frame sent\nI0213 13:50:00.196075 1259 log.go:172] (0xc000a54420) (0xc00020a3c0) Stream removed, broadcasting: 3\nI0213 13:50:00.196385 1259 log.go:172] (0xc000a54420) Data frame received for 1\nI0213 13:50:00.196430 1259 log.go:172] (0xc0002c2820) (1) Data frame handling\nI0213 13:50:00.196458 1259 log.go:172] (0xc0002c2820) (1) Data frame sent\nI0213 13:50:00.196480 1259 log.go:172] (0xc000a54420) (0xc0002c2820) Stream removed, broadcasting: 1\nI0213 13:50:00.196865 1259 log.go:172] (0xc000a54420) (0xc00020a460) Stream removed, broadcasting: 5\nI0213 13:50:00.196929 1259 log.go:172] (0xc000a54420) Go away received\nI0213 13:50:00.198469 1259 log.go:172] (0xc000a54420) (0xc0002c2820) Stream removed, broadcasting: 1\nI0213 13:50:00.198480 1259 log.go:172] (0xc000a54420) (0xc00020a3c0) Stream removed, broadcasting: 3\nI0213 13:50:00.198486 1259 log.go:172] (0xc000a54420) (0xc00020a460) Stream removed, broadcasting: 5\n" Feb 13 13:50:00.209: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 13:50:00.209: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 13:50:00.209: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 13:50:00.223: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 13 13:50:10.239: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=false
Feb 13 13:50:10.239: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 13:50:10.239: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 13:50:10.263: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 13 13:50:10.263: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:12 +0000 UTC }]
Feb 13 13:50:10.263: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:50:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:50:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:35 +0000 UTC }]
Feb 13 13:50:10.263: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:50:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:50:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:49:35 +0000 UTC }]
Feb 13 13:50:10.263: INFO: 
Feb 13 13:50:10.263: INFO: StatefulSet ss has not reached scale 0, at 3
[each subsequent poll printed the same per-pod condition table, with a 30s deletion grace period shown from the 13:50:12.156 poll onward; ss-1 went Pending at 13:50:16.207 and was gone by 13:50:17.216, ss-2 went Pending at 13:50:18.224]
Feb 13 13:50:12.156: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 13 13:50:13.168: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 13 13:50:14.180: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 13 13:50:15.193: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 13 13:50:16.207: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 13 13:50:17.216: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 13 13:50:18.224: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 13 13:50:19.240: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 13 13:50:20.253: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2012
Feb 13 13:50:21.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 13:50:21.489: INFO: rc: 1
Feb 13 13:50:21.489: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc003224ea0 exit status 1 true [0xc000f3e280 0xc000f3e298 0xc000f3e2b0] [0xc000f3e280 0xc000f3e298 0xc000f3e2b0] [0xc000f3e290 0xc000f3e2a8] [0xba6c50 0xba6c50] 0xc00257d440 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Feb 13 13:50:31.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 13:50:31.670: INFO: rc: 1
Feb 13 13:50:31.670: INFO: Waiting 10s to retry failed RunHostCmd: error running the same kubectl exec command:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[the same command was retried every 10s from 13:50:41.671 through 13:55:16.634 -- 28 further attempts, each returning rc: 1 with the same NotFound error]
Feb 13 13:55:26.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2012 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 13:55:26.973: INFO: rc: 1
Feb 13 13:55:26.974: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 13 13:55:26.974: INFO: Scaling statefulset ss to 0
Feb 13 13:55:27.038: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 13 13:55:27.044: INFO: Deleting all statefulset in ns statefulset-2012
Feb 13 13:55:27.047: INFO: Scaling statefulset ss to 0
Feb 13 13:55:27.060: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 13:55:27.062: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:55:27.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2012" for this suite.
Feb 13 13:55:33.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:55:33.317: INFO: namespace statefulset-2012 deletion completed in 6.201388725s

• [SLOW TEST:381.623 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
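The "Scaling statefulset ss to 0" / "Waiting for statefulset status.replicas updated to 0" steps above can be reproduced outside the harness with client-go. A minimal sketch, not the framework's actual implementation, assuming a client-go release contemporary with the v1.15 cluster in this log (these calls take no context.Context); namespace and name are taken from the log:

    package main

    import (
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig the harness logs with ">>> kubeConfig:".
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, name := "statefulset-2012", "ss"

        // "Scaling statefulset ss to 0": set spec.replicas to zero.
        ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        zero := int32(0)
        ss.Spec.Replicas = &zero
        if _, err = cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
            panic(err)
        }

        // "Waiting for statefulset status.replicas updated to 0": poll status.
        err = wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
            cur, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return cur.Status.Replicas == 0, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("scaled to 0")
    }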
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:55:33.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 13:55:33.439: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.955664ms)
Feb 13 13:55:33.448: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.599451ms)
Feb 13 13:55:33.456: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.178107ms)
Feb 13 13:55:33.462: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.949982ms)
Feb 13 13:55:33.468: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.95099ms)
Feb 13 13:55:33.475: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.498318ms)
Feb 13 13:55:33.481: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.068575ms)
Feb 13 13:55:33.486: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.333859ms)
Feb 13 13:55:33.493: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.553195ms)
Feb 13 13:55:33.506: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.200381ms)
Feb 13 13:55:33.516: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.633484ms)
Feb 13 13:55:33.564: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 47.869583ms)
Feb 13 13:55:33.571: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.367063ms)
Feb 13 13:55:33.579: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.787671ms)
Feb 13 13:55:33.584: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.305633ms)
Feb 13 13:55:33.592: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.795557ms)
Feb 13 13:55:33.601: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.629931ms)
Feb 13 13:55:33.608: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.075596ms)
Feb 13 13:55:33.620: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.729359ms)
Feb 13 13:55:33.631: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.934986ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:55:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1064" for this suite.
Feb 13 13:55:39.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:55:39.865: INFO: namespace proxy-1064 deletion completed in 6.228949216s

• [SLOW TEST:6.547 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
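The endpoint this spec times twenty times is the node "proxy" subresource. A minimal client-go sketch of one such request, assuming the v1.15-era client (DoRaw takes no context.Context); node name is from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /api/v1/nodes/iruya-node/proxy/logs/ -- the apiserver proxies
        // the request to the kubelet's log directory listing.
        body, err := cs.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("iruya-node").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%.200s\n", body) // the test likewise logs a truncated body
    }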
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:55:39.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9903f0cf-7a51-43e6-afc4-e3eb55c97634 in namespace container-probe-3944
Feb 13 13:55:47.998: INFO: Started pod liveness-9903f0cf-7a51-43e6-afc4-e3eb55c97634 in namespace container-probe-3944
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 13:55:48.003: INFO: Initial restart count of pod liveness-9903f0cf-7a51-43e6-afc4-e3eb55c97634 is 0
Feb 13 13:56:12.128: INFO: Restart count of pod container-probe-3944/liveness-9903f0cf-7a51-43e6-afc4-e3eb55c97634 is now 1 (24.124420244s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:56:12.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3944" for this suite.
Feb 13 13:56:18.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:56:18.387: INFO: namespace container-probe-3944 deletion completed in 6.177963663s

• [SLOW TEST:38.522 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
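The restart from count 0 to 1 above is driven by an HTTP liveness probe on /healthz. A sketch of the pod shape such a spec creates, using the v1.15 API types (the probe field is named Handler there; newer releases call it ProbeHandler); the image, args, and timings are illustrative, not read from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "k8s.gcr.io/liveness", // illustrative image
                    Args:  []string{"/server"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080),
                            },
                        },
                        // Once /healthz starts failing, the kubelet kills and
                        // restarts the container, bumping restartCount.
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }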
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:56:18.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 13 13:56:18.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1211'
Feb 13 13:56:18.756: INFO: stderr: ""
Feb 13 13:56:18.756: INFO: stdout: "pod/pause created\n"
Feb 13 13:56:18.756: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 13 13:56:18.756: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1211" to be "running and ready"
Feb 13 13:56:18.774: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.084321ms
Feb 13 13:56:20.781: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024226202s
Feb 13 13:56:22.788: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031756822s
Feb 13 13:56:24.798: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041756323s
Feb 13 13:56:26.816: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.059287371s
Feb 13 13:56:26.816: INFO: Pod "pause" satisfied condition "running and ready"
Feb 13 13:56:26.816: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 13 13:56:26.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1211'
Feb 13 13:56:27.110: INFO: stderr: ""
Feb 13 13:56:27.110: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 13 13:56:27.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1211'
Feb 13 13:56:27.624: INFO: stderr: ""
Feb 13 13:56:27.624: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 13 13:56:27.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1211'
Feb 13 13:56:27.969: INFO: stderr: ""
Feb 13 13:56:27.969: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 13 13:56:27.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1211'
Feb 13 13:56:28.104: INFO: stderr: ""
Feb 13 13:56:28.105: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 13 13:56:28.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1211'
Feb 13 13:56:28.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 13:56:28.317: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 13 13:56:28.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1211'
Feb 13 13:56:28.431: INFO: stderr: "No resources found.\n"
Feb 13 13:56:28.431: INFO: stdout: ""
Feb 13 13:56:28.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1211 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 13 13:56:28.547: INFO: stderr: ""
Feb 13 13:56:28.547: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:56:28.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1211" for this suite.
Feb 13 13:56:34.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:56:34.673: INFO: namespace kubectl-1211 deletion completed in 6.117140986s

• [SLOW TEST:16.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
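The two kubectl label invocations above are, on the wire, strategic-merge patches against metadata.labels. A minimal client-go equivalent, assuming the v1.15-era Patch signature (no context.Context); namespace and pod name are from the log:

    package main

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("kubectl-1211")

        // kubectl label pods pause testing-label=testing-label-value
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        if _, err := pods.Patch("pause", types.StrategicMergePatchType, add); err != nil {
            panic(err)
        }

        // kubectl label pods pause testing-label-  (a null value deletes the key)
        del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        if _, err := pods.Patch("pause", types.StrategicMergePatchType, del); err != nil {
            panic(err)
        }
    }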
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:56:34.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0213 13:56:44.827405       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 13:56:44.827: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:56:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8572" for this suite.
Feb 13 13:56:50.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:56:50.976: INFO: namespace gc-8572 deletion completed in 6.143092534s

• [SLOW TEST:16.302 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
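"Not orphaning" in the spec name means the RC is deleted with a propagation policy that lets the garbage collector remove its owned pods, which is what the "wait for all pods to be garbage collected" step observes. A sketch under the v1.15-era Delete signature; the RC name is hypothetical since the log never prints it:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Background propagation: the RC goes away immediately and the GC
        // then deletes the pods it owned via their ownerReferences.
        policy := metav1.DeletePropagationBackground
        err = cs.CoreV1().ReplicationControllers("gc-8572").Delete(
            "example-rc", // hypothetical name
            &metav1.DeleteOptions{PropagationPolicy: &policy},
        )
        if err != nil {
            panic(err)
        }
    }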
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:56:50.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-87016b6a-d2fe-4c7e-8160-55e01af70263
STEP: Creating a pod to test consume configMaps
Feb 13 13:56:51.197: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4" in namespace "projected-6223" to be "success or failure"
Feb 13 13:56:51.203: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.639373ms
Feb 13 13:56:53.213: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015719468s
Feb 13 13:56:55.226: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028433495s
Feb 13 13:56:57.246: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047993987s
Feb 13 13:56:59.259: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061174572s
Feb 13 13:57:01.271: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072854573s
STEP: Saw pod success
Feb 13 13:57:01.271: INFO: Pod "pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4" satisfied condition "success or failure"
Feb 13 13:57:01.275: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 13:57:01.385: INFO: Waiting for pod pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4 to disappear
Feb 13 13:57:01.397: INFO: Pod pod-projected-configmaps-f9bb733f-5c0a-4134-9567-8835baaf01a4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:57:01.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6223" for this suite.
Feb 13 13:57:07.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:57:07.665: INFO: namespace projected-6223 deletion completed in 6.256981671s

• [SLOW TEST:16.688 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
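"Mappings and Item mode set" refers to projecting a configMap key to a chosen path with an explicit per-file mode. A sketch of that volume using the v1.15 API types; the configMap name, key, path, and mode are illustrative values, not read from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // the "Item mode" the test then asserts on the file
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "example-configmap", // hypothetical name
                            },
                            // The mapping: key "data-1" appears in the pod at
                            // path/to/data-1 instead of its default filename.
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "path/to/data-1",
                                Mode: &mode,
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name)
    }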
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:57:07.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 13:57:07.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:57:15.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-812" for this suite.
Feb 13 13:58:08.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:58:08.116: INFO: namespace pods-812 deletion completed in 52.136308902s

• [SLOW TEST:60.451 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
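The subresource exercised here is the pod's read-logs endpoint; the conformance test drives it over a websocket, while plain client-go streams the same endpoint over HTTP. A minimal sketch of the client-go path, assuming the v1.15-era Stream signature; the pod name is hypothetical since the log does not print it:

    package main

    import (
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET .../namespaces/pods-812/pods/example-pod/log, streamed.
        req := cs.CoreV1().Pods("pods-812").GetLogs("example-pod", &corev1.PodLogOptions{})
        stream, err := req.Stream()
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        io.Copy(os.Stdout, stream)
    }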
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:58:08.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 13 13:58:08.305: INFO: Waiting up to 5m0s for pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9" in namespace "emptydir-4871" to be "success or failure"
Feb 13 13:58:08.333: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.132745ms
Feb 13 13:58:10.369: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063983139s
Feb 13 13:58:12.373: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068349427s
Feb 13 13:58:14.384: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079167089s
Feb 13 13:58:16.949: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.64393009s
Feb 13 13:58:18.962: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.657365931s
STEP: Saw pod success
Feb 13 13:58:18.962: INFO: Pod "pod-8b90e7df-ae17-4162-9024-220734fdbaa9" satisfied condition "success or failure"
Feb 13 13:58:18.970: INFO: Trying to get logs from node iruya-node pod pod-8b90e7df-ae17-4162-9024-220734fdbaa9 container test-container: 
STEP: delete the pod
Feb 13 13:58:19.033: INFO: Waiting for pod pod-8b90e7df-ae17-4162-9024-220734fdbaa9 to disappear
Feb 13 13:58:19.037: INFO: Pod pod-8b90e7df-ae17-4162-9024-220734fdbaa9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:58:19.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4871" for this suite.
Feb 13 13:58:25.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:58:25.184: INFO: namespace emptydir-4871 deletion completed in 6.141737758s

• [SLOW TEST:17.067 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
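
Note: each emptyDir spec in this run creates a short-lived pod that writes a file into the volume and checks its ownership and mode bits; (non-root,0666,tmpfs) means a non-root UID, mode 0666, and Medium=Memory. The suite uses its own mounttest image, so the busybox command, UID, and names below are stand-ins:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1001) // non-root UID; the exact value is an assumption
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-0666-tmpfs-"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // the suite uses its own mounttest image; busybox is a stand-in
                    Command: []string{"sh", "-c",
                        "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && stat -c %a /mnt/volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

The pod runs to completion, so the poll loop in the log is waiting for Phase=Succeeded, which is the "success or failure" condition it reports.
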
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:58:25.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 13 13:58:25.305: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:58:40.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6439" for this suite.
Feb 13 13:58:46.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:58:47.006: INFO: namespace pods-6439 deletion completed in 6.151181683s

• [SLOW TEST:21.821 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
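
Note: the submit-and-remove spec drives the pod lifecycle through a watch: create, observe ADDED, delete with a grace period, then wait for DELETED. A client-go sketch of the watch-and-delete half (v1.15-era signatures; namespace, pod name, and grace period are illustrative):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ns, name := "default", "example-pod" // placeholders, not from the log
        w, err := clientset.CoreV1().Pods(ns).Watch(metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        // Graceful delete, mirroring "deleting the pod gracefully".
        if err := clientset.CoreV1().Pods(ns).Delete(name, metav1.NewDeleteOptions(30)); err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println("observed:", ev.Type)
            if ev.Type == watch.Deleted {
                return // "verifying pod deletion was observed"
            }
        }
    }
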
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:58:47.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-c9d7d385-8ff6-4673-8b58-6b48efc65a5c
STEP: Creating a pod to test consume configMaps
Feb 13 13:58:47.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0" in namespace "configmap-576" to be "success or failure"
Feb 13 13:58:47.191: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.304813ms
Feb 13 13:58:49.200: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030846813s
Feb 13 13:58:51.209: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03942932s
Feb 13 13:58:53.220: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05103182s
Feb 13 13:58:55.247: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077631687s
Feb 13 13:58:57.256: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086417652s
STEP: Saw pod success
Feb 13 13:58:57.256: INFO: Pod "pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0" satisfied condition "success or failure"
Feb 13 13:58:57.261: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0 container configmap-volume-test: 
STEP: delete the pod
Feb 13 13:58:57.317: INFO: Waiting for pod pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0 to disappear
Feb 13 13:58:57.322: INFO: Pod pod-configmaps-0fc29b98-b0de-42e4-8386-39f5ed63c4e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:58:57.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-576" for this suite.
Feb 13 13:59:03.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:59:03.570: INFO: namespace configmap-576 deletion completed in 6.242618185s

• [SLOW TEST:16.564 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
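
Note: "mappings and Item mode set" means the ConfigMap volume uses an items list to remap a key to a file path and sets a per-file mode on that item. A sketch of just the volume source; the key, path, and mode values are assumptions (the log does not show them), and the ConfigMap name is shortened from the log's generated one:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // per-item mode; overrides the volume's defaultMode for this file
        vol := corev1.Volume{
            Name: "configmap-volume", // illustrative
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                    Items: []corev1.KeyToPath{{
                        Key:  "data-1",         // key in the ConfigMap (assumed)
                        Path: "path/to/data-1", // remapped file path inside the mount (assumed)
                        Mode: &mode,
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
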
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:59:03.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 13 13:59:03.679: INFO: Waiting up to 5m0s for pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b" in namespace "var-expansion-8488" to be "success or failure"
Feb 13 13:59:03.711: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.992241ms
Feb 13 13:59:05.719: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039683375s
Feb 13 13:59:07.726: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046641374s
Feb 13 13:59:09.736: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056300185s
Feb 13 13:59:11.748: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06837762s
STEP: Saw pod success
Feb 13 13:59:11.748: INFO: Pod "var-expansion-e215c084-59b1-457a-8147-743d8672cd4b" satisfied condition "success or failure"
Feb 13 13:59:11.753: INFO: Trying to get logs from node iruya-node pod var-expansion-e215c084-59b1-457a-8147-743d8672cd4b container dapi-container: 
STEP: delete the pod
Feb 13 13:59:11.829: INFO: Waiting for pod var-expansion-e215c084-59b1-457a-8147-743d8672cd4b to disappear
Feb 13 13:59:11.857: INFO: Pod var-expansion-e215c084-59b1-457a-8147-743d8672cd4b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:59:11.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8488" for this suite.
Feb 13 13:59:17.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:59:18.033: INFO: namespace var-expansion-8488 deletion completed in 6.156293189s

• [SLOW TEST:14.463 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
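
Note: variable expansion composes env vars with the $(VAR) syntax; the kubelet expands references to variables defined earlier in the same container's env list. A sketch with illustrative names and values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "bar-value"},
            // $(FOO) and $(BAR) refer to the entries above; order matters,
            // since only previously defined variables are expanded.
            {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
        }
        for _, e := range env {
            fmt.Printf("%s=%s\n", e.Name, e.Value)
        }
    }

Inside the container, FOOBAR would resolve to foo-value;;bar-value, which a test container can echo for the framework to check.
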
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:59:18.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 13:59:18.154: INFO: Create a RollingUpdate DaemonSet
Feb 13 13:59:18.161: INFO: Check that daemon pods launch on every node of the cluster
Feb 13 13:59:18.185: INFO: Number of nodes with available pods: 0
Feb 13 13:59:18.185: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:19.411: INFO: Number of nodes with available pods: 0
Feb 13 13:59:19.411: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:20.195: INFO: Number of nodes with available pods: 0
Feb 13 13:59:20.195: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:21.207: INFO: Number of nodes with available pods: 0
Feb 13 13:59:21.207: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:22.207: INFO: Number of nodes with available pods: 0
Feb 13 13:59:22.207: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:24.514: INFO: Number of nodes with available pods: 0
Feb 13 13:59:24.514: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:25.705: INFO: Number of nodes with available pods: 0
Feb 13 13:59:25.705: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:26.200: INFO: Number of nodes with available pods: 0
Feb 13 13:59:26.200: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:27.220: INFO: Number of nodes with available pods: 0
Feb 13 13:59:27.220: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:28.237: INFO: Number of nodes with available pods: 1
Feb 13 13:59:28.237: INFO: Node iruya-node is running more than one daemon pod
Feb 13 13:59:29.220: INFO: Number of nodes with available pods: 2
Feb 13 13:59:29.220: INFO: Number of running nodes: 2, number of available pods: 2
Feb 13 13:59:29.220: INFO: Update the DaemonSet to trigger a rollout
Feb 13 13:59:29.235: INFO: Updating DaemonSet daemon-set
Feb 13 13:59:38.285: INFO: Roll back the DaemonSet before rollout is complete
Feb 13 13:59:38.295: INFO: Updating DaemonSet daemon-set
Feb 13 13:59:38.295: INFO: Make sure DaemonSet rollback is complete
Feb 13 13:59:38.309: INFO: Wrong image for pod: daemon-set-48r6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 13 13:59:38.309: INFO: Pod daemon-set-48r6g is not available
Feb 13 13:59:39.918: INFO: Wrong image for pod: daemon-set-48r6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 13 13:59:39.919: INFO: Pod daemon-set-48r6g is not available
Feb 13 13:59:40.786: INFO: Pod daemon-set-mrn7z is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8346, will wait for the garbage collector to delete the pods
Feb 13 13:59:41.432: INFO: Deleting DaemonSet.extensions daemon-set took: 564.372983ms
Feb 13 13:59:41.933: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.723762ms
Feb 13 13:59:50.651: INFO: Number of nodes with available pods: 0
Feb 13 13:59:50.652: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 13:59:50.664: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8346/daemonsets","resourceVersion":"24203520"},"items":null}

Feb 13 13:59:50.670: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8346/pods","resourceVersion":"24203520"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 13:59:50.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8346" for this suite.
Feb 13 13:59:56.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:59:56.913: INFO: namespace daemonsets-8346 deletion completed in 6.221179838s

• [SLOW TEST:38.879 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
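
Note: the rollback spec updates a RollingUpdate DaemonSet to an image that can never pull (the log shows foo:non-existent), then reverts the template before the rollout finishes; "without unnecessary restarts" means the pods still running the good image (docker.io/library/nginx:1.14-alpine) are left alone. A client-go sketch of that update/rollback pair, with v1.15-era signatures and an illustrative namespace:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        dsClient := clientset.AppsV1().DaemonSets("default") // this run used namespace daemonsets-8346
        ds, err := dsClient.Get("daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        good := ds.Spec.Template.Spec.Containers[0].Image // e.g. docker.io/library/nginx:1.14-alpine
        ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent" // rollout that can never become available
        if ds, err = dsClient.Update(ds); err != nil {
            panic(err)
        }
        ds.Spec.Template.Spec.Containers[0].Image = good // roll back before the rollout completes
        if _, err = dsClient.Update(ds); err != nil {
            panic(err)
        }
        fmt.Println("rolled back to", good)
    }
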
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 13:59:56.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 13 13:59:56.978: INFO: namespace kubectl-1082
Feb 13 13:59:56.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1082'
Feb 13 13:59:59.479: INFO: stderr: ""
Feb 13 13:59:59.479: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 13 14:00:00.566: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:00.566: INFO: Found 0 / 1
Feb 13 14:00:01.487: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:01.487: INFO: Found 0 / 1
Feb 13 14:00:02.491: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:02.491: INFO: Found 0 / 1
Feb 13 14:00:03.488: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:03.488: INFO: Found 0 / 1
Feb 13 14:00:04.490: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:04.490: INFO: Found 0 / 1
Feb 13 14:00:05.495: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:05.495: INFO: Found 0 / 1
Feb 13 14:00:06.494: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:06.494: INFO: Found 0 / 1
Feb 13 14:00:07.490: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:07.490: INFO: Found 0 / 1
Feb 13 14:00:08.490: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:08.491: INFO: Found 1 / 1
Feb 13 14:00:08.491: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 13 14:00:08.496: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:00:08.496: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 13 14:00:08.496: INFO: wait on redis-master startup in kubectl-1082 
Feb 13 14:00:08.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8g2pn redis-master --namespace=kubectl-1082'
Feb 13 14:00:08.746: INFO: stderr: ""
Feb 13 14:00:08.746: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 13 Feb 14:00:07.151 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 14:00:07.152 # Server started, Redis version 3.2.12\n1:M 13 Feb 14:00:07.154 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Feb 14:00:07.154 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 13 14:00:08.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1082'
Feb 13 14:00:09.042: INFO: stderr: ""
Feb 13 14:00:09.042: INFO: stdout: "service/rm2 exposed\n"
Feb 13 14:00:09.054: INFO: Service rm2 in namespace kubectl-1082 found.
STEP: exposing service
Feb 13 14:00:11.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1082'
Feb 13 14:00:11.318: INFO: stderr: ""
Feb 13 14:00:11.318: INFO: stdout: "service/rm3 exposed\n"
Feb 13 14:00:11.325: INFO: Service rm3 in namespace kubectl-1082 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:00:13.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1082" for this suite.
Feb 13 14:00:37.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:00:37.560: INFO: namespace kubectl-1082 deletion completed in 24.212914032s

• [SLOW TEST:40.645 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
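
Note: kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 builds a Service whose selector is copied from the replication controller (the log's pods match app=redis). The generated object is roughly equivalent to this sketch:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-1082"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "redis"}, // inherited from the RC
                Ports: []corev1.ServicePort{{
                    Port:       1234,                 // --port
                    TargetPort: intstr.FromInt(6379), // --target-port
                }},
            },
        }
        out, _ := json.MarshalIndent(svc, "", "  ")
        fmt.Println(string(out))
    }

The follow-on "exposing service" step repeats the trick against rm2 itself, so rm2 and rm3 both front the same Redis pod on different ports.
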
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:00:37.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:00:37.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b" in namespace "projected-5926" to be "success or failure"
Feb 13 14:00:37.712: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.949727ms
Feb 13 14:00:39.721: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050139795s
Feb 13 14:00:41.797: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126369064s
Feb 13 14:00:43.809: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138740473s
Feb 13 14:00:45.824: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153033322s
Feb 13 14:00:47.831: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160283875s
STEP: Saw pod success
Feb 13 14:00:47.831: INFO: Pod "downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b" satisfied condition "success or failure"
Feb 13 14:00:47.835: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b container client-container: 
STEP: delete the pod
Feb 13 14:00:47.997: INFO: Waiting for pod downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b to disappear
Feb 13 14:00:48.006: INFO: Pod downwardapi-volume-eb4cdede-19a0-4097-bd5c-9e1701c4571b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:00:48.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5926" for this suite.
Feb 13 14:00:54.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:00:54.200: INFO: namespace projected-5926 deletion completed in 6.187413129s

• [SLOW TEST:16.639 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
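
Note: this spec mounts the downward API through a projected volume and writes the container's CPU request into a file; the divisor converts the quantity into the unit the test reads back (1m yields millicores). The file path and divisor below are assumptions; "client-container" is the container name the log shows:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo", // illustrative
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_request", // assumed file name
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.cpu",
                                    Divisor:       resource.MustParse("1m"), // report in millicores
                                },
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
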
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:00:54.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 13 14:00:54.276: INFO: Waiting up to 5m0s for pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14" in namespace "emptydir-9367" to be "success or failure"
Feb 13 14:00:54.331: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14": Phase="Pending", Reason="", readiness=false. Elapsed: 55.003133ms
Feb 13 14:00:56.342: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066262876s
Feb 13 14:00:58.349: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073726375s
Feb 13 14:01:00.358: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082699618s
Feb 13 14:01:02.368: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092535874s
STEP: Saw pod success
Feb 13 14:01:02.368: INFO: Pod "pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14" satisfied condition "success or failure"
Feb 13 14:01:02.371: INFO: Trying to get logs from node iruya-node pod pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14 container test-container: 
STEP: delete the pod
Feb 13 14:01:02.451: INFO: Waiting for pod pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14 to disappear
Feb 13 14:01:02.461: INFO: Pod pod-5abd49e7-8b3b-43ae-8b92-5f5a6f236a14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:01:02.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9367" for this suite.
Feb 13 14:01:08.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:01:08.655: INFO: namespace emptydir-9367 deletion completed in 6.181156289s

• [SLOW TEST:14.455 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
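
Note: (non-root,0644,default) differs from the tmpfs sketch earlier only in the mode bits and the medium; the empty-string default medium puts the emptyDir on node-local storage instead of tmpfs:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        src := corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}
        fmt.Printf("medium=%q (empty string selects the node's default storage)\n", src.Medium)
    }
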
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:01:08.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:01:08.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8" in namespace "projected-292" to be "success or failure"
Feb 13 14:01:08.855: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.993816ms
Feb 13 14:01:10.937: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119074132s
Feb 13 14:01:12.948: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130245216s
Feb 13 14:01:14.992: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173924192s
Feb 13 14:01:17.033: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.215338358s
STEP: Saw pod success
Feb 13 14:01:17.033: INFO: Pod "downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8" satisfied condition "success or failure"
Feb 13 14:01:17.041: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8 container client-container: 
STEP: delete the pod
Feb 13 14:01:17.175: INFO: Waiting for pod downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8 to disappear
Feb 13 14:01:17.180: INFO: Pod downwardapi-volume-d8deb3df-66f5-492d-90f0-c50ec729b7e8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:01:17.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-292" for this suite.
Feb 13 14:01:23.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:01:23.349: INFO: namespace projected-292 deletion completed in 6.162374096s

• [SLOW TEST:14.693 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
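
Note: when a container declares no memory limit, the kubelet substitutes the node's allocatable memory when resolving limits.memory, which is what this spec asserts. A sketch of the downward API file involved; the path and divisor are assumptions, "client-container" is from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        file := corev1.DownwardAPIVolumeFile{
            Path: "memory_limit", // assumed file name
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.memory", // no limit set => node allocatable is reported
                Divisor:       resource.MustParse("1Mi"),
            },
        }
        out, _ := json.MarshalIndent(file, "", "  ")
        fmt.Println(string(out))
    }

The later "node allocatable (cpu) as default cpu limit" spec in this log is the same pattern with limits.cpu.
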
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:01:23.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:01:23.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d" in namespace "downward-api-2604" to be "success or failure"
Feb 13 14:01:24.003: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 76.656959ms
Feb 13 14:01:26.017: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091159277s
Feb 13 14:01:28.050: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124336889s
Feb 13 14:01:30.063: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136627855s
Feb 13 14:01:32.073: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147098901s
STEP: Saw pod success
Feb 13 14:01:32.073: INFO: Pod "downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d" satisfied condition "success or failure"
Feb 13 14:01:32.077: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d container client-container: 
STEP: delete the pod
Feb 13 14:01:32.152: INFO: Waiting for pod downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d to disappear
Feb 13 14:01:32.162: INFO: Pod downwardapi-volume-c9278ace-f569-4143-85b8-fd1861c1ec3d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:01:32.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2604" for this suite.
Feb 13 14:01:38.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:01:38.394: INFO: namespace downward-api-2604 deletion completed in 6.223394409s

• [SLOW TEST:15.045 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
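
Note: DefaultMode applies to every file a downward API volume writes unless an individual item overrides it; the test then reads the mode bits back from inside the pod. The 0400 value and the projected field below are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        defaultMode := int32(0400) // assumed; applies to every projected file
        vol := corev1.Volume{
            Name: "podinfo", // illustrative
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &defaultMode,
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
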
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:01:38.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:01:48.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-259" for this suite.
Feb 13 14:02:32.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:02:32.707: INFO: namespace kubelet-test-259 deletion completed in 44.179900818s

• [SLOW TEST:54.313 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
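
Note: the kubelet spec schedules a one-shot busybox pod whose command writes to stdout, then checks that the kubelet captured that output so it is retrievable as container logs. A sketch of such a pod; the name and the echoed text are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "busybox-scheduling-"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Whatever lands on stdout becomes the container's log.
                    Command: []string{"sh", "-c", "echo 'Hello World'"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
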
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:02:32.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:02:32.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64" in namespace "downward-api-3638" to be "success or failure"
Feb 13 14:02:32.851: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64": Phase="Pending", Reason="", readiness=false. Elapsed: 14.826403ms
Feb 13 14:02:34.866: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029753235s
Feb 13 14:02:36.880: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043015156s
Feb 13 14:02:38.888: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051252435s
Feb 13 14:02:40.903: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06629604s
STEP: Saw pod success
Feb 13 14:02:40.903: INFO: Pod "downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64" satisfied condition "success or failure"
Feb 13 14:02:40.909: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64 container client-container: 
STEP: delete the pod
Feb 13 14:02:40.972: INFO: Waiting for pod downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64 to disappear
Feb 13 14:02:40.978: INFO: Pod downwardapi-volume-0cd0c54f-850a-47b4-b193-2e8aa2979b64 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:02:40.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3638" for this suite.
Feb 13 14:02:46.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:02:47.105: INFO: namespace downward-api-3638 deletion completed in 6.122170316s

• [SLOW TEST:14.397 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:02:47.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 13 14:02:47.170: INFO: Waiting up to 5m0s for pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c" in namespace "containers-6715" to be "success or failure"
Feb 13 14:02:47.178: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513329ms
Feb 13 14:02:49.185: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015682222s
Feb 13 14:02:51.193: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023505005s
Feb 13 14:02:53.203: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033584897s
Feb 13 14:02:55.211: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041219045s
STEP: Saw pod success
Feb 13 14:02:55.211: INFO: Pod "client-containers-fae81446-7d94-4c49-b755-7b33529a398c" satisfied condition "success or failure"
Feb 13 14:02:55.215: INFO: Trying to get logs from node iruya-node pod client-containers-fae81446-7d94-4c49-b755-7b33529a398c container test-container: 
STEP: delete the pod
Feb 13 14:02:55.277: INFO: Waiting for pod client-containers-fae81446-7d94-4c49-b755-7b33529a398c to disappear
Feb 13 14:02:55.422: INFO: Pod client-containers-fae81446-7d94-4c49-b755-7b33529a398c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:02:55.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6715" for this suite.
Feb 13 14:03:01.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:03:01.631: INFO: namespace containers-6715 deletion completed in 6.19973665s

• [SLOW TEST:14.526 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
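
Note: "override all" (the STEP line above) means both halves of the image's startup contract are replaced: Command overrides the image's ENTRYPOINT and Args overrides its CMD. A sketch of the container spec; the image and echoed text are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:    "test-container",
            Image:   "busybox",                            // stand-in for the suite's test image
            Command: []string{"/bin/sh"},                  // replaces the image's ENTRYPOINT
            Args:    []string{"-c", "echo override all"},  // replaces the image's CMD
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
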
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:03:01.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-1bacd6e1-0e70-45af-b1b8-35d1e5cdae08
STEP: Creating a pod to test consume secrets
Feb 13 14:03:02.444: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96" in namespace "projected-4454" to be "success or failure"
Feb 13 14:03:02.466: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96": Phase="Pending", Reason="", readiness=false. Elapsed: 22.182841ms
Feb 13 14:03:04.478: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033858076s
Feb 13 14:03:06.493: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049053372s
Feb 13 14:03:08.506: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06209788s
Feb 13 14:03:10.520: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075701757s
STEP: Saw pod success
Feb 13 14:03:10.520: INFO: Pod "pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96" satisfied condition "success or failure"
Feb 13 14:03:10.524: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96 container secret-volume-test: 
STEP: delete the pod
Feb 13 14:03:10.598: INFO: Waiting for pod pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96 to disappear
Feb 13 14:03:10.665: INFO: Pod pod-projected-secrets-27b3d904-0136-43c0-83af-96e49d15ac96 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:03:10.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4454" for this suite.
Feb 13 14:03:16.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:03:16.808: INFO: namespace projected-4454 deletion completed in 6.13718022s

• [SLOW TEST:15.176 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
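
Note: "consumable in multiple volumes" mounts the same projected secret through two separate volumes in one pod and verifies the content at both mount points. A sketch of the volume pair; the volume names are illustrative and the secret name is shortened from the log's generated one:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mkVol := func(name string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                            },
                        }},
                    },
                },
            }
        }
        // Two volumes, one secret; each gets its own mount path in the pod.
        vols := []corev1.Volume{mkVol("secret-volume-1"), mkVol("secret-volume-2")}
        out, _ := json.MarshalIndent(vols, "", "  ")
        fmt.Println(string(out))
    }
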
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:03:16.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 13 14:03:16.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5717'
Feb 13 14:03:17.457: INFO: stderr: ""
Feb 13 14:03:17.457: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 14:03:17.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5717'
Feb 13 14:03:17.646: INFO: stderr: ""
Feb 13 14:03:17.646: INFO: stdout: "update-demo-nautilus-bl268 update-demo-nautilus-s5bld "
Feb 13 14:03:17.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bl268 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:17.833: INFO: stderr: ""
Feb 13 14:03:17.833: INFO: stdout: ""
Feb 13 14:03:17.833: INFO: update-demo-nautilus-bl268 is created but not running
Feb 13 14:03:22.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5717'
Feb 13 14:03:23.811: INFO: stderr: ""
Feb 13 14:03:23.812: INFO: stdout: "update-demo-nautilus-bl268 update-demo-nautilus-s5bld "
Feb 13 14:03:23.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bl268 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:24.275: INFO: stderr: ""
Feb 13 14:03:24.275: INFO: stdout: ""
Feb 13 14:03:24.275: INFO: update-demo-nautilus-bl268 is created but not running
Feb 13 14:03:29.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5717'
Feb 13 14:03:29.463: INFO: stderr: ""
Feb 13 14:03:29.463: INFO: stdout: "update-demo-nautilus-bl268 update-demo-nautilus-s5bld "
Feb 13 14:03:29.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bl268 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:29.662: INFO: stderr: ""
Feb 13 14:03:29.662: INFO: stdout: "true"
Feb 13 14:03:29.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bl268 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:29.826: INFO: stderr: ""
Feb 13 14:03:29.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 14:03:29.826: INFO: validating pod update-demo-nautilus-bl268
Feb 13 14:03:29.849: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 14:03:29.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 14:03:29.849: INFO: update-demo-nautilus-bl268 is verified up and running
Feb 13 14:03:29.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5bld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:29.967: INFO: stderr: ""
Feb 13 14:03:29.967: INFO: stdout: "true"
Feb 13 14:03:29.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5bld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5717'
Feb 13 14:03:30.086: INFO: stderr: ""
Feb 13 14:03:30.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 14:03:30.086: INFO: validating pod update-demo-nautilus-s5bld
Feb 13 14:03:30.094: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 14:03:30.094: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 14:03:30.094: INFO: update-demo-nautilus-s5bld is verified up and running
STEP: using delete to clean up resources
Feb 13 14:03:30.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5717'
Feb 13 14:03:30.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 14:03:30.213: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 13 14:03:30.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5717'
Feb 13 14:03:30.346: INFO: stderr: "No resources found.\n"
Feb 13 14:03:30.346: INFO: stdout: ""
Feb 13 14:03:30.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5717 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 13 14:03:30.595: INFO: stderr: ""
Feb 13 14:03:30.595: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:03:30.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5717" for this suite.
Feb 13 14:03:52.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:03:52.849: INFO: namespace kubectl-5717 deletion completed in 22.217973553s

• [SLOW TEST:36.041 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
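
Note: the Update Demo spec drives everything through kubectl, but the object behind replicationcontroller/update-demo-nautilus is an ordinary ReplicationController. The name, label, container name, and image below come from the log; the replica count of 2 is inferred from the two nautilus pods it lists:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(2) // inferred from the two update-demo-nautilus-* pods
        rc := corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": "update-demo"}, // the label kubectl get -l filters on
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "update-demo"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "update-demo",
                            Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(rc, "", "  ")
        fmt.Println(string(out))
    }
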
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:03:52.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:03:53.009: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 13 14:03:53.159: INFO: Number of nodes with available pods: 0
Feb 13 14:03:53.159: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:54.175: INFO: Number of nodes with available pods: 0
Feb 13 14:03:54.176: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:55.449: INFO: Number of nodes with available pods: 0
Feb 13 14:03:55.449: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:56.172: INFO: Number of nodes with available pods: 0
Feb 13 14:03:56.172: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:57.716: INFO: Number of nodes with available pods: 0
Feb 13 14:03:57.717: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:58.172: INFO: Number of nodes with available pods: 0
Feb 13 14:03:58.172: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:03:59.176: INFO: Number of nodes with available pods: 0
Feb 13 14:03:59.176: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:01.412: INFO: Number of nodes with available pods: 0
Feb 13 14:04:01.412: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:02.172: INFO: Number of nodes with available pods: 0
Feb 13 14:04:02.172: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:03.897: INFO: Number of nodes with available pods: 0
Feb 13 14:04:03.897: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:04.326: INFO: Number of nodes with available pods: 0
Feb 13 14:04:04.326: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:05.176: INFO: Number of nodes with available pods: 0
Feb 13 14:04:05.177: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:06.226: INFO: Number of nodes with available pods: 0
Feb 13 14:04:06.226: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:07.182: INFO: Number of nodes with available pods: 1
Feb 13 14:04:07.182: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:08.175: INFO: Number of nodes with available pods: 2
Feb 13 14:04:08.175: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods' image.
STEP: Check that daemon pods' images are updated.
Feb 13 14:04:08.220: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:08.220: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:09.266: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:09.266: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:10.267: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:10.268: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:11.267: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:11.267: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:12.262: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:12.262: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:12.262: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:13.265: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:13.265: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:13.265: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:14.264: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:14.264: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:14.264: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:15.266: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:15.266: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:15.267: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:16.264: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:16.264: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:16.264: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:17.264: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:17.264: INFO: Wrong image for pod: daemon-set-tsqjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:17.264: INFO: Pod daemon-set-tsqjj is not available
Feb 13 14:04:18.265: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:18.265: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:19.433: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:19.433: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:20.266: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:20.266: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:21.291: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:21.291: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:22.511: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:22.511: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:23.426: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:23.426: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:24.267: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:24.267: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:25.286: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:25.286: INFO: Pod daemon-set-f6ghl is not available
Feb 13 14:04:26.264: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:27.270: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:28.264: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:29.265: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:30.263: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:30.263: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:31.290: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:31.290: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:32.262: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:32.262: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:34.815: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:34.815: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:35.266: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:35.266: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:36.266: INFO: Wrong image for pod: daemon-set-546p7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 13 14:04:36.266: INFO: Pod daemon-set-546p7 is not available
Feb 13 14:04:37.295: INFO: Pod daemon-set-qmgkz is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 13 14:04:37.316: INFO: Number of nodes with available pods: 1
Feb 13 14:04:37.316: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:38.340: INFO: Number of nodes with available pods: 1
Feb 13 14:04:38.340: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:39.341: INFO: Number of nodes with available pods: 1
Feb 13 14:04:39.341: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:40.349: INFO: Number of nodes with available pods: 1
Feb 13 14:04:40.350: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:41.328: INFO: Number of nodes with available pods: 1
Feb 13 14:04:41.328: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:42.332: INFO: Number of nodes with available pods: 1
Feb 13 14:04:42.333: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:43.335: INFO: Number of nodes with available pods: 1
Feb 13 14:04:43.335: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:44.338: INFO: Number of nodes with available pods: 1
Feb 13 14:04:44.338: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:04:45.337: INFO: Number of nodes with available pods: 2
Feb 13 14:04:45.337: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8480, waiting for the garbage collector to delete the pods
Feb 13 14:04:45.493: INFO: Deleting DaemonSet.extensions daemon-set took: 36.281611ms
Feb 13 14:04:45.793: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.459631ms
Feb 13 14:04:53.133: INFO: Number of nodes with available pods: 0
Feb 13 14:04:53.133: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 14:04:53.136: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8480/daemonsets","resourceVersion":"24204305"},"items":null}

Feb 13 14:04:53.139: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8480/pods","resourceVersion":"24204305"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:04:53.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8480" for this suite.
Feb 13 14:04:59.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:04:59.321: INFO: namespace daemonsets-8480 deletion completed in 6.115148364s

• [SLOW TEST:66.471 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
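
The rollout above, where nginx:1.14-alpine is replaced by the redis test image one pod at a time, is what the RollingUpdate strategy does. A minimal sketch of an equivalent DaemonSet and the image swap that triggers it; names and labels are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # old pods are torn down and replaced one by one
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Changing the pod template's image starts the rolling update seen in the log.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set
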
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:04:59.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 13 14:04:59.443: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 13 14:05:04.452: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:05:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8466" for this suite.
Feb 13 14:05:11.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:05:11.767: INFO: namespace replication-controller-8466 deletion completed in 6.153465313s

• [SLOW TEST:12.446 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
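
"Released" here means orphaned: once a pod's labels stop matching the controller's selector, the ReplicationController drops its ownerReference on the pod and creates a replacement to restore the replica count. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Overwrite the label so the pod no longer matches the selector;
# the RC releases it and spins up a fresh replica in its place.
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
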
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:05:11.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-587
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-587
STEP: Deleting pre-stop pod
Feb 13 14:05:37.031: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:05:37.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-587" for this suite.
Feb 13 14:06:15.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:06:15.238: INFO: namespace prestop-587 deletion completed in 38.166171317s

• [SLOW TEST:63.471 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
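
The "prestop": 1 counter reported by the server above is the tester pod's preStop hook firing while the pod is being deleted. A minimal sketch of a pod wired the same way; the image and the endpoint it calls are illustrative, not the exact ones this suite uses:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Runs inside the container after deletion is requested,
          # before SIGTERM reaches the main process.
          command: ["/bin/sh", "-c", "wget -qO- http://server/write?prestop=1"]
EOF

# Deleting the pod is what triggers the hook and bumps the counter.
kubectl delete pod tester
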
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:06:15.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f57e668c-81ca-4bad-a038-abfc151188c1
STEP: Creating a pod to test consume configMaps
Feb 13 14:06:15.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225" in namespace "configmap-5543" to be "success or failure"
Feb 13 14:06:15.380: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225": Phase="Pending", Reason="", readiness=false. Elapsed: 64.765059ms
Feb 13 14:06:17.391: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075802866s
Feb 13 14:06:19.406: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090343206s
Feb 13 14:06:21.418: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102433647s
Feb 13 14:06:23.435: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120113677s
STEP: Saw pod success
Feb 13 14:06:23.436: INFO: Pod "pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225" satisfied condition "success or failure"
Feb 13 14:06:23.442: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225 container configmap-volume-test: 
STEP: delete the pod
Feb 13 14:06:23.716: INFO: Waiting for pod pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225 to disappear
Feb 13 14:06:23.741: INFO: Pod pod-configmaps-9f122665-51c4-4de8-89a0-809f25d5f225 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:06:23.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5543" for this suite.
Feb 13 14:06:29.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:06:29.975: INFO: namespace configmap-5543 deletion completed in 6.226098982s

• [SLOW TEST:14.736 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
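
"Mappings" in this test name refers to the items list of the configMap volume source, which projects a key under a different path on disk; "non-root" means the pod runs with a non-zero UID and must still be able to read the projected file. A minimal sketch with illustrative names and values:

kubectl create configmap configmap-test --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000                  # non-root, per the [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/nginx:1.14-alpine
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test
      items:
      - key: data-1
        path: path/to/data-2         # the mapping: key renamed on disk
EOF
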
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:06:29.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:06:30.117: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 13 14:06:30.128: INFO: Number of nodes with available pods: 0
Feb 13 14:06:30.128: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 13 14:06:30.166: INFO: Number of nodes with available pods: 0
Feb 13 14:06:30.166: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:31.177: INFO: Number of nodes with available pods: 0
Feb 13 14:06:31.177: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:32.180: INFO: Number of nodes with available pods: 0
Feb 13 14:06:32.180: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:33.177: INFO: Number of nodes with available pods: 0
Feb 13 14:06:33.177: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:34.236: INFO: Number of nodes with available pods: 0
Feb 13 14:06:34.236: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:35.179: INFO: Number of nodes with available pods: 0
Feb 13 14:06:35.179: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:36.176: INFO: Number of nodes with available pods: 0
Feb 13 14:06:36.176: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:37.173: INFO: Number of nodes with available pods: 1
Feb 13 14:06:37.173: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 13 14:06:37.263: INFO: Number of nodes with available pods: 1
Feb 13 14:06:37.263: INFO: Number of running nodes: 0, number of available pods: 1
Feb 13 14:06:38.278: INFO: Number of nodes with available pods: 0
Feb 13 14:06:38.278: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 13 14:06:38.443: INFO: Number of nodes with available pods: 0
Feb 13 14:06:38.443: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:39.453: INFO: Number of nodes with available pods: 0
Feb 13 14:06:39.453: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:40.453: INFO: Number of nodes with available pods: 0
Feb 13 14:06:40.454: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:41.453: INFO: Number of nodes with available pods: 0
Feb 13 14:06:41.453: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:42.452: INFO: Number of nodes with available pods: 0
Feb 13 14:06:42.452: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:43.453: INFO: Number of nodes with available pods: 0
Feb 13 14:06:43.453: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:44.455: INFO: Number of nodes with available pods: 0
Feb 13 14:06:44.455: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:45.452: INFO: Number of nodes with available pods: 0
Feb 13 14:06:45.452: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:46.451: INFO: Number of nodes with available pods: 0
Feb 13 14:06:46.451: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:47.457: INFO: Number of nodes with available pods: 0
Feb 13 14:06:47.457: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:48.465: INFO: Number of nodes with available pods: 0
Feb 13 14:06:48.465: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:49.454: INFO: Number of nodes with available pods: 0
Feb 13 14:06:49.454: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:50.460: INFO: Number of nodes with available pods: 0
Feb 13 14:06:50.460: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:51.450: INFO: Number of nodes with available pods: 0
Feb 13 14:06:51.450: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:52.468: INFO: Number of nodes with available pods: 0
Feb 13 14:06:52.468: INFO: Node iruya-node is running more than one daemon pod
Feb 13 14:06:53.478: INFO: Number of nodes with available pods: 1
Feb 13 14:06:53.478: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1704, waiting for the garbage collector to delete the pods
Feb 13 14:06:53.624: INFO: Deleting DaemonSet.extensions daemon-set took: 16.825077ms
Feb 13 14:06:53.924: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.574312ms
Feb 13 14:07:06.630: INFO: Number of nodes with available pods: 0
Feb 13 14:07:06.630: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 14:07:06.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1704/daemonsets","resourceVersion":"24204677"},"items":null}

Feb 13 14:07:06.639: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1704/pods","resourceVersion":"24204677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:07:06.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1704" for this suite.
Feb 13 14:07:12.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:07:12.859: INFO: namespace daemonsets-1704 deletion completed in 6.151753859s

• [SLOW TEST:42.884 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
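
The blue/green choreography above is plain label-driven scheduling: the DaemonSet carries a nodeSelector, so its pods appear and disappear as node labels change. A minimal sketch; the label key and values are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        daemon-color: blue           # no node matches yet, so no pods run
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Turning a node blue schedules a daemon pod onto it...
kubectl label node iruya-node daemon-color=blue
# ...and flipping it to green evicts that pod again.
kubectl label node iruya-node daemon-color=green --overwrite
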
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:07:12.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2382
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2382
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-2382
Feb 13 14:07:13.127: INFO: Found 0 stateful pods, waiting for 1
Feb 13 14:07:23.159: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 13 14:07:23.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 14:07:23.836: INFO: stderr: "I0213 14:07:23.418477    2350 log.go:172] (0xc00013a630) (0xc00066e960) Create stream\nI0213 14:07:23.418973    2350 log.go:172] (0xc00013a630) (0xc00066e960) Stream added, broadcasting: 1\nI0213 14:07:23.430904    2350 log.go:172] (0xc00013a630) Reply frame received for 1\nI0213 14:07:23.431011    2350 log.go:172] (0xc00013a630) (0xc0006ca000) Create stream\nI0213 14:07:23.431026    2350 log.go:172] (0xc00013a630) (0xc0006ca000) Stream added, broadcasting: 3\nI0213 14:07:23.432695    2350 log.go:172] (0xc00013a630) Reply frame received for 3\nI0213 14:07:23.432755    2350 log.go:172] (0xc00013a630) (0xc00050a000) Create stream\nI0213 14:07:23.432780    2350 log.go:172] (0xc00013a630) (0xc00050a000) Stream added, broadcasting: 5\nI0213 14:07:23.436248    2350 log.go:172] (0xc00013a630) Reply frame received for 5\nI0213 14:07:23.567774    2350 log.go:172] (0xc00013a630) Data frame received for 5\nI0213 14:07:23.567940    2350 log.go:172] (0xc00050a000) (5) Data frame handling\nI0213 14:07:23.567969    2350 log.go:172] (0xc00050a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 14:07:23.650247    2350 log.go:172] (0xc00013a630) Data frame received for 3\nI0213 14:07:23.650395    2350 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0213 14:07:23.650445    2350 log.go:172] (0xc0006ca000) (3) Data frame sent\nI0213 14:07:23.811986    2350 log.go:172] (0xc00013a630) (0xc0006ca000) Stream removed, broadcasting: 3\nI0213 14:07:23.812384    2350 log.go:172] (0xc00013a630) Data frame received for 1\nI0213 14:07:23.812419    2350 log.go:172] (0xc00066e960) (1) Data frame handling\nI0213 14:07:23.812445    2350 log.go:172] (0xc00066e960) (1) Data frame sent\nI0213 14:07:23.812455    2350 log.go:172] (0xc00013a630) (0xc00066e960) Stream removed, broadcasting: 1\nI0213 14:07:23.813025    2350 log.go:172] (0xc00013a630) (0xc00050a000) Stream removed, broadcasting: 5\nI0213 14:07:23.813436    2350 log.go:172] (0xc00013a630) Go away received\nI0213 14:07:23.814531    2350 log.go:172] (0xc00013a630) (0xc00066e960) Stream removed, broadcasting: 1\nI0213 14:07:23.814584    2350 log.go:172] (0xc00013a630) (0xc0006ca000) Stream removed, broadcasting: 3\nI0213 14:07:23.814605    2350 log.go:172] (0xc00013a630) (0xc00050a000) Stream removed, broadcasting: 5\n"
Feb 13 14:07:23.837: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 14:07:23.837: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
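
The mv above is the test's lever for making ss-0 unready: with index.html moved aside, the container's readiness probe (assumed here to be an HTTP check against that file) starts failing, and the StatefulSet controller halts further scaling while any pod is unhealthy. Moving the file back reverses it:

# Break readiness on ss-0; scale-up halts while it is unready.
kubectl exec --namespace=statefulset-2382 ss-0 -- /bin/sh -c 'mv /usr/share/nginx/html/index.html /tmp/'
# Restore readiness so scaling can proceed.
kubectl exec --namespace=statefulset-2382 ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/share/nginx/html/'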

Feb 13 14:07:23.876: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 13 14:07:33.899: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 14:07:33.899: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 14:07:33.969: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999425s
Feb 13 14:07:34.999: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981828467s
Feb 13 14:07:36.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.952487677s
Feb 13 14:07:37.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.938233742s
Feb 13 14:07:38.035: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.926142357s
Feb 13 14:07:39.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.916033812s
Feb 13 14:07:40.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.906979003s
Feb 13 14:07:41.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.897892061s
Feb 13 14:07:42.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.88894905s
Feb 13 14:07:43.086: INFO: Verifying statefulset ss doesn't scale past 1 for another 873.628042ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-2382
Feb 13 14:07:44.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:07:44.752: INFO: stderr: "I0213 14:07:44.295036    2370 log.go:172] (0xc0008322c0) (0xc0008e46e0) Create stream\nI0213 14:07:44.295254    2370 log.go:172] (0xc0008322c0) (0xc0008e46e0) Stream added, broadcasting: 1\nI0213 14:07:44.299326    2370 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0213 14:07:44.299374    2370 log.go:172] (0xc0008322c0) (0xc000542280) Create stream\nI0213 14:07:44.299384    2370 log.go:172] (0xc0008322c0) (0xc000542280) Stream added, broadcasting: 3\nI0213 14:07:44.303419    2370 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0213 14:07:44.303532    2370 log.go:172] (0xc0008322c0) (0xc000456000) Create stream\nI0213 14:07:44.303540    2370 log.go:172] (0xc0008322c0) (0xc000456000) Stream added, broadcasting: 5\nI0213 14:07:44.305032    2370 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0213 14:07:44.478609    2370 log.go:172] (0xc0008322c0) Data frame received for 3\nI0213 14:07:44.478888    2370 log.go:172] (0xc000542280) (3) Data frame handling\nI0213 14:07:44.478920    2370 log.go:172] (0xc0008322c0) Data frame received for 5\nI0213 14:07:44.478942    2370 log.go:172] (0xc000456000) (5) Data frame handling\nI0213 14:07:44.478952    2370 log.go:172] (0xc000456000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 14:07:44.478977    2370 log.go:172] (0xc000542280) (3) Data frame sent\nI0213 14:07:44.739941    2370 log.go:172] (0xc0008322c0) Data frame received for 1\nI0213 14:07:44.740059    2370 log.go:172] (0xc0008322c0) (0xc000542280) Stream removed, broadcasting: 3\nI0213 14:07:44.740143    2370 log.go:172] (0xc0008e46e0) (1) Data frame handling\nI0213 14:07:44.740171    2370 log.go:172] (0xc0008e46e0) (1) Data frame sent\nI0213 14:07:44.740185    2370 log.go:172] (0xc0008322c0) (0xc0008e46e0) Stream removed, broadcasting: 1\nI0213 14:07:44.740200    2370 log.go:172] (0xc0008322c0) (0xc000456000) Stream removed, broadcasting: 5\nI0213 14:07:44.740211    2370 log.go:172] (0xc0008322c0) Go away received\nI0213 14:07:44.741016    2370 log.go:172] (0xc0008322c0) (0xc0008e46e0) Stream removed, broadcasting: 1\nI0213 14:07:44.741028    2370 log.go:172] (0xc0008322c0) (0xc000542280) Stream removed, broadcasting: 3\nI0213 14:07:44.741032    2370 log.go:172] (0xc0008322c0) (0xc000456000) Stream removed, broadcasting: 5\n"
Feb 13 14:07:44.752: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 13 14:07:44.752: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 13 14:07:44.787: INFO: Found 2 stateful pods, waiting for 3
Feb 13 14:07:54.799: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 14:07:54.799: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 14:07:54.799: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 14:08:04.800: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 14:08:04.800: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 14:08:04.800: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 13 14:08:04.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 14:08:05.529: INFO: stderr: "I0213 14:08:05.135673    2388 log.go:172] (0xc000116dc0) (0xc00066a780) Create stream\nI0213 14:08:05.136132    2388 log.go:172] (0xc000116dc0) (0xc00066a780) Stream added, broadcasting: 1\nI0213 14:08:05.149638    2388 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0213 14:08:05.149743    2388 log.go:172] (0xc000116dc0) (0xc0008100a0) Create stream\nI0213 14:08:05.149756    2388 log.go:172] (0xc000116dc0) (0xc0008100a0) Stream added, broadcasting: 3\nI0213 14:08:05.151993    2388 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0213 14:08:05.152038    2388 log.go:172] (0xc000116dc0) (0xc00085e000) Create stream\nI0213 14:08:05.152055    2388 log.go:172] (0xc000116dc0) (0xc00085e000) Stream added, broadcasting: 5\nI0213 14:08:05.153983    2388 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0213 14:08:05.313819    2388 log.go:172] (0xc000116dc0) Data frame received for 3\nI0213 14:08:05.313928    2388 log.go:172] (0xc0008100a0) (3) Data frame handling\nI0213 14:08:05.313952    2388 log.go:172] (0xc0008100a0) (3) Data frame sent\nI0213 14:08:05.313992    2388 log.go:172] (0xc000116dc0) Data frame received for 5\nI0213 14:08:05.314003    2388 log.go:172] (0xc00085e000) (5) Data frame handling\nI0213 14:08:05.314017    2388 log.go:172] (0xc00085e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 14:08:05.503738    2388 log.go:172] (0xc000116dc0) (0xc0008100a0) Stream removed, broadcasting: 3\nI0213 14:08:05.504014    2388 log.go:172] (0xc000116dc0) (0xc00085e000) Stream removed, broadcasting: 5\nI0213 14:08:05.504060    2388 log.go:172] (0xc000116dc0) Data frame received for 1\nI0213 14:08:05.504084    2388 log.go:172] (0xc00066a780) (1) Data frame handling\nI0213 14:08:05.504118    2388 log.go:172] (0xc00066a780) (1) Data frame sent\nI0213 14:08:05.504135    2388 log.go:172] (0xc000116dc0) (0xc00066a780) Stream removed, broadcasting: 1\nI0213 14:08:05.504158    2388 log.go:172] (0xc000116dc0) Go away received\nI0213 14:08:05.506230    2388 log.go:172] (0xc000116dc0) (0xc00066a780) Stream removed, broadcasting: 1\nI0213 14:08:05.506265    2388 log.go:172] (0xc000116dc0) (0xc0008100a0) Stream removed, broadcasting: 3\nI0213 14:08:05.506282    2388 log.go:172] (0xc000116dc0) (0xc00085e000) Stream removed, broadcasting: 5\n"
Feb 13 14:08:05.529: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 14:08:05.529: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 13 14:08:05.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 14:08:06.091: INFO: stderr: "I0213 14:08:05.865939    2407 log.go:172] (0xc000a26420) (0xc000522640) Create stream\nI0213 14:08:05.866210    2407 log.go:172] (0xc000a26420) (0xc000522640) Stream added, broadcasting: 1\nI0213 14:08:05.870751    2407 log.go:172] (0xc000a26420) Reply frame received for 1\nI0213 14:08:05.870882    2407 log.go:172] (0xc000a26420) (0xc0005be3c0) Create stream\nI0213 14:08:05.870910    2407 log.go:172] (0xc000a26420) (0xc0005be3c0) Stream added, broadcasting: 3\nI0213 14:08:05.874958    2407 log.go:172] (0xc000a26420) Reply frame received for 3\nI0213 14:08:05.875051    2407 log.go:172] (0xc000a26420) (0xc00093a000) Create stream\nI0213 14:08:05.875088    2407 log.go:172] (0xc000a26420) (0xc00093a000) Stream added, broadcasting: 5\nI0213 14:08:05.877118    2407 log.go:172] (0xc000a26420) Reply frame received for 5\nI0213 14:08:05.955739    2407 log.go:172] (0xc000a26420) Data frame received for 5\nI0213 14:08:05.955818    2407 log.go:172] (0xc00093a000) (5) Data frame handling\nI0213 14:08:05.955842    2407 log.go:172] (0xc00093a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 14:08:05.976118    2407 log.go:172] (0xc000a26420) Data frame received for 3\nI0213 14:08:05.976199    2407 log.go:172] (0xc0005be3c0) (3) Data frame handling\nI0213 14:08:05.976221    2407 log.go:172] (0xc0005be3c0) (3) Data frame sent\nI0213 14:08:06.078953    2407 log.go:172] (0xc000a26420) Data frame received for 1\nI0213 14:08:06.079209    2407 log.go:172] (0xc000a26420) (0xc00093a000) Stream removed, broadcasting: 5\nI0213 14:08:06.079350    2407 log.go:172] (0xc000522640) (1) Data frame handling\nI0213 14:08:06.079388    2407 log.go:172] (0xc000522640) (1) Data frame sent\nI0213 14:08:06.079417    2407 log.go:172] (0xc000a26420) (0xc0005be3c0) Stream removed, broadcasting: 3\nI0213 14:08:06.079479    2407 log.go:172] (0xc000a26420) (0xc000522640) Stream removed, broadcasting: 1\nI0213 14:08:06.079509    2407 log.go:172] (0xc000a26420) Go away received\nI0213 14:08:06.080533    2407 log.go:172] (0xc000a26420) (0xc000522640) Stream removed, broadcasting: 1\nI0213 14:08:06.080550    2407 log.go:172] (0xc000a26420) (0xc0005be3c0) Stream removed, broadcasting: 3\nI0213 14:08:06.080558    2407 log.go:172] (0xc000a26420) (0xc00093a000) Stream removed, broadcasting: 5\n"
Feb 13 14:08:06.092: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 14:08:06.092: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 13 14:08:06.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 14:08:06.861: INFO: stderr: "I0213 14:08:06.354392    2428 log.go:172] (0xc0009240b0) (0xc00081a6e0) Create stream\nI0213 14:08:06.355133    2428 log.go:172] (0xc0009240b0) (0xc00081a6e0) Stream added, broadcasting: 1\nI0213 14:08:06.372788    2428 log.go:172] (0xc0009240b0) Reply frame received for 1\nI0213 14:08:06.372942    2428 log.go:172] (0xc0009240b0) (0xc00053e280) Create stream\nI0213 14:08:06.372978    2428 log.go:172] (0xc0009240b0) (0xc00053e280) Stream added, broadcasting: 3\nI0213 14:08:06.376732    2428 log.go:172] (0xc0009240b0) Reply frame received for 3\nI0213 14:08:06.376805    2428 log.go:172] (0xc0009240b0) (0xc0002d2000) Create stream\nI0213 14:08:06.376821    2428 log.go:172] (0xc0009240b0) (0xc0002d2000) Stream added, broadcasting: 5\nI0213 14:08:06.380265    2428 log.go:172] (0xc0009240b0) Reply frame received for 5\nI0213 14:08:06.643355    2428 log.go:172] (0xc0009240b0) Data frame received for 5\nI0213 14:08:06.643513    2428 log.go:172] (0xc0002d2000) (5) Data frame handling\nI0213 14:08:06.643581    2428 log.go:172] (0xc0002d2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0213 14:08:06.664817    2428 log.go:172] (0xc0009240b0) Data frame received for 3\nI0213 14:08:06.664936    2428 log.go:172] (0xc00053e280) (3) Data frame handling\nI0213 14:08:06.664958    2428 log.go:172] (0xc00053e280) (3) Data frame sent\nI0213 14:08:06.840648    2428 log.go:172] (0xc0009240b0) Data frame received for 1\nI0213 14:08:06.840886    2428 log.go:172] (0xc0009240b0) (0xc0002d2000) Stream removed, broadcasting: 5\nI0213 14:08:06.841065    2428 log.go:172] (0xc00081a6e0) (1) Data frame handling\nI0213 14:08:06.841108    2428 log.go:172] (0xc00081a6e0) (1) Data frame sent\nI0213 14:08:06.841122    2428 log.go:172] (0xc0009240b0) (0xc00053e280) Stream removed, broadcasting: 3\nI0213 14:08:06.841169    2428 log.go:172] (0xc0009240b0) (0xc00081a6e0) Stream removed, broadcasting: 1\nI0213 14:08:06.841231    2428 log.go:172] (0xc0009240b0) Go away received\nI0213 14:08:06.842888    2428 log.go:172] (0xc0009240b0) (0xc00081a6e0) Stream removed, broadcasting: 1\nI0213 14:08:06.842914    2428 log.go:172] (0xc0009240b0) (0xc00053e280) Stream removed, broadcasting: 3\nI0213 14:08:06.842960    2428 log.go:172] (0xc0009240b0) (0xc0002d2000) Stream removed, broadcasting: 5\n"
Feb 13 14:08:06.861: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 14:08:06.861: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 13 14:08:06.861: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 14:08:06.890: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 14:08:06.890: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 14:08:06.890: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 14:08:06.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999519s
Feb 13 14:08:07.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975969582s
Feb 13 14:08:08.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967034179s
Feb 13 14:08:09.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.95853101s
Feb 13 14:08:10.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950357357s
Feb 13 14:08:11.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.940893872s
Feb 13 14:08:12.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.92977696s
Feb 13 14:08:14.010: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.918884467s
Feb 13 14:08:15.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.906765523s
Feb 13 14:08:16.045: INFO: Verifying statefulset ss doesn't scale past 3 for another 888.23868ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2382
Feb 13 14:08:17.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:17.739: INFO: stderr: "I0213 14:08:17.252402    2447 log.go:172] (0xc000a74420) (0xc0006c86e0) Create stream\nI0213 14:08:17.252737    2447 log.go:172] (0xc000a74420) (0xc0006c86e0) Stream added, broadcasting: 1\nI0213 14:08:17.260337    2447 log.go:172] (0xc000a74420) Reply frame received for 1\nI0213 14:08:17.260483    2447 log.go:172] (0xc000a74420) (0xc000596460) Create stream\nI0213 14:08:17.260502    2447 log.go:172] (0xc000a74420) (0xc000596460) Stream added, broadcasting: 3\nI0213 14:08:17.261980    2447 log.go:172] (0xc000a74420) Reply frame received for 3\nI0213 14:08:17.262041    2447 log.go:172] (0xc000a74420) (0xc000a42000) Create stream\nI0213 14:08:17.262077    2447 log.go:172] (0xc000a74420) (0xc000a42000) Stream added, broadcasting: 5\nI0213 14:08:17.265225    2447 log.go:172] (0xc000a74420) Reply frame received for 5\nI0213 14:08:17.400825    2447 log.go:172] (0xc000a74420) Data frame received for 5\nI0213 14:08:17.401025    2447 log.go:172] (0xc000a42000) (5) Data frame handling\nI0213 14:08:17.401048    2447 log.go:172] (0xc000a42000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 14:08:17.401075    2447 log.go:172] (0xc000a74420) Data frame received for 3\nI0213 14:08:17.401089    2447 log.go:172] (0xc000596460) (3) Data frame handling\nI0213 14:08:17.401118    2447 log.go:172] (0xc000596460) (3) Data frame sent\nI0213 14:08:17.721851    2447 log.go:172] (0xc000a74420) Data frame received for 1\nI0213 14:08:17.722143    2447 log.go:172] (0xc0006c86e0) (1) Data frame handling\nI0213 14:08:17.722281    2447 log.go:172] (0xc0006c86e0) (1) Data frame sent\nI0213 14:08:17.722333    2447 log.go:172] (0xc000a74420) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0213 14:08:17.725271    2447 log.go:172] (0xc000a74420) (0xc000596460) Stream removed, broadcasting: 3\nI0213 14:08:17.725695    2447 log.go:172] (0xc000a74420) (0xc000a42000) Stream removed, broadcasting: 5\nI0213 14:08:17.725792    2447 log.go:172] (0xc000a74420) Go away received\nI0213 14:08:17.725953    2447 log.go:172] (0xc000a74420) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0213 14:08:17.725977    2447 log.go:172] (0xc000a74420) (0xc000596460) Stream removed, broadcasting: 3\nI0213 14:08:17.725993    2447 log.go:172] (0xc000a74420) (0xc000a42000) Stream removed, broadcasting: 5\n"
Feb 13 14:08:17.739: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 13 14:08:17.739: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 13 14:08:17.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:18.200: INFO: stderr: "I0213 14:08:18.011632    2467 log.go:172] (0xc00086a370) (0xc000880c80) Create stream\nI0213 14:08:18.011847    2467 log.go:172] (0xc00086a370) (0xc000880c80) Stream added, broadcasting: 1\nI0213 14:08:18.024616    2467 log.go:172] (0xc00086a370) Reply frame received for 1\nI0213 14:08:18.024675    2467 log.go:172] (0xc00086a370) (0xc000880000) Create stream\nI0213 14:08:18.024690    2467 log.go:172] (0xc00086a370) (0xc000880000) Stream added, broadcasting: 3\nI0213 14:08:18.025808    2467 log.go:172] (0xc00086a370) Reply frame received for 3\nI0213 14:08:18.025832    2467 log.go:172] (0xc00086a370) (0xc0001d4320) Create stream\nI0213 14:08:18.025841    2467 log.go:172] (0xc00086a370) (0xc0001d4320) Stream added, broadcasting: 5\nI0213 14:08:18.028191    2467 log.go:172] (0xc00086a370) Reply frame received for 5\nI0213 14:08:18.111117    2467 log.go:172] (0xc00086a370) Data frame received for 5\nI0213 14:08:18.111203    2467 log.go:172] (0xc0001d4320) (5) Data frame handling\nI0213 14:08:18.111224    2467 log.go:172] (0xc0001d4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0213 14:08:18.111461    2467 log.go:172] (0xc00086a370) Data frame received for 3\nI0213 14:08:18.111474    2467 log.go:172] (0xc000880000) (3) Data frame handling\nI0213 14:08:18.111489    2467 log.go:172] (0xc000880000) (3) Data frame sent\nI0213 14:08:18.185973    2467 log.go:172] (0xc00086a370) Data frame received for 1\nI0213 14:08:18.186099    2467 log.go:172] (0xc000880c80) (1) Data frame handling\nI0213 14:08:18.186144    2467 log.go:172] (0xc000880c80) (1) Data frame sent\nI0213 14:08:18.186186    2467 log.go:172] (0xc00086a370) (0xc000880c80) Stream removed, broadcasting: 1\nI0213 14:08:18.187248    2467 log.go:172] (0xc00086a370) (0xc000880000) Stream removed, broadcasting: 3\nI0213 14:08:18.187521    2467 log.go:172] (0xc00086a370) (0xc0001d4320) Stream removed, broadcasting: 5\nI0213 14:08:18.187544    2467 log.go:172] (0xc00086a370) Go away received\nI0213 14:08:18.187607    2467 log.go:172] (0xc00086a370) (0xc000880c80) Stream removed, broadcasting: 1\nI0213 14:08:18.187669    2467 log.go:172] (0xc00086a370) (0xc000880000) Stream removed, broadcasting: 3\nI0213 14:08:18.187705    2467 log.go:172] (0xc00086a370) (0xc0001d4320) Stream removed, broadcasting: 5\n"
Feb 13 14:08:18.200: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 13 14:08:18.200: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 13 14:08:18.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:18.663: INFO: rc: 126
Feb 13 14:08:18.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0213 14:08:18.524201    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Create stream
I0213 14:08:18.524853    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream added, broadcasting: 1
I0213 14:08:18.557787    2488 log.go:172] (0xc0007628f0) Reply frame received for 1
I0213 14:08:18.558053    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Create stream
I0213 14:08:18.558074    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream added, broadcasting: 3
I0213 14:08:18.561179    2488 log.go:172] (0xc0007628f0) Reply frame received for 3
I0213 14:08:18.561243    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Create stream
I0213 14:08:18.561265    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream added, broadcasting: 5
I0213 14:08:18.564381    2488 log.go:172] (0xc0007628f0) Reply frame received for 5
I0213 14:08:18.632737    2488 log.go:172] (0xc0007628f0) Data frame received for 3
I0213 14:08:18.632970    2488 log.go:172] (0xc00081e0a0) (3) Data frame handling
I0213 14:08:18.633029    2488 log.go:172] (0xc00081e0a0) (3) Data frame sent
I0213 14:08:18.637598    2488 log.go:172] (0xc0007628f0) Data frame received for 1
I0213 14:08:18.637671    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream removed, broadcasting: 5
I0213 14:08:18.637712    2488 log.go:172] (0xc00081ea00) (1) Data frame handling
I0213 14:08:18.637728    2488 log.go:172] (0xc00081ea00) (1) Data frame sent
I0213 14:08:18.637895    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream removed, broadcasting: 3
I0213 14:08:18.637946    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream removed, broadcasting: 1
I0213 14:08:18.637980    2488 log.go:172] (0xc0007628f0) Go away received
I0213 14:08:18.639905    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream removed, broadcasting: 1
I0213 14:08:18.639928    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream removed, broadcasting: 3
I0213 14:08:18.639942    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc001f644e0 exit status 126   true [0xc000f3e040 0xc000f3e058 0xc000f3e070] [0xc000f3e040 0xc000f3e058 0xc000f3e070] [0xc000f3e050 0xc000f3e068] [0xba6c50 0xba6c50] 0xc002860c60 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0213 14:08:18.524201    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Create stream
I0213 14:08:18.524853    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream added, broadcasting: 1
I0213 14:08:18.557787    2488 log.go:172] (0xc0007628f0) Reply frame received for 1
I0213 14:08:18.558053    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Create stream
I0213 14:08:18.558074    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream added, broadcasting: 3
I0213 14:08:18.561179    2488 log.go:172] (0xc0007628f0) Reply frame received for 3
I0213 14:08:18.561243    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Create stream
I0213 14:08:18.561265    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream added, broadcasting: 5
I0213 14:08:18.564381    2488 log.go:172] (0xc0007628f0) Reply frame received for 5
I0213 14:08:18.632737    2488 log.go:172] (0xc0007628f0) Data frame received for 3
I0213 14:08:18.632970    2488 log.go:172] (0xc00081e0a0) (3) Data frame handling
I0213 14:08:18.633029    2488 log.go:172] (0xc00081e0a0) (3) Data frame sent
I0213 14:08:18.637598    2488 log.go:172] (0xc0007628f0) Data frame received for 1
I0213 14:08:18.637671    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream removed, broadcasting: 5
I0213 14:08:18.637712    2488 log.go:172] (0xc00081ea00) (1) Data frame handling
I0213 14:08:18.637728    2488 log.go:172] (0xc00081ea00) (1) Data frame sent
I0213 14:08:18.637895    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream removed, broadcasting: 3
I0213 14:08:18.637946    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream removed, broadcasting: 1
I0213 14:08:18.637980    2488 log.go:172] (0xc0007628f0) Go away received
I0213 14:08:18.639905    2488 log.go:172] (0xc0007628f0) (0xc00081ea00) Stream removed, broadcasting: 1
I0213 14:08:18.639928    2488 log.go:172] (0xc0007628f0) (0xc00081e0a0) Stream removed, broadcasting: 3
I0213 14:08:18.639942    2488 log.go:172] (0xc0007628f0) (0xc00096e000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
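The exit codes above distinguish two failure modes: 126 comes from the container runtime (the nginx container had already stopped, so the shell could not be exec'd), while the rc: 1 results that follow come from kubectl itself, first because the connection upgrade found no running container and then because the pod object was gone entirely. A minimal sketch of the retry pattern the framework is logging here, shelling out to kubectl the same way (the 5-attempt cap is an assumption for illustration; the suite itself retried every 10s for roughly five minutes):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same command the log shows, split into argv form.
	args := []string{
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace=statefulset-2382", "ss-2", "--",
		"/bin/sh", "-x", "-c", "mv -v /tmp/index.html /usr/share/nginx/html/ || true",
	}
	for attempt := 0; attempt < 5; attempt++ { // attempt cap is illustrative
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded:\n%s", out)
			return
		}
		// rc 126: the runtime refused the exec (container stopped);
		// rc 1: kubectl error (connection upgrade failed or pod not found).
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("rc: %d\n%s", ee.ExitCode(), out)
		}
		time.Sleep(10 * time.Second)
	}
}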
Feb 13 14:08:28.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:28.988: INFO: rc: 1
Feb 13 14:08:28.988: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f3df50 exit status 1   true [0xc0032500e8 0xc003250100 0xc003250118] [0xc0032500e8 0xc003250100 0xc003250118] [0xc0032500f8 0xc003250110] [0xba6c50 0xba6c50] 0xc0024193e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 13 14:08:38.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:39.252: INFO: rc: 1
Feb 13 14:08:39.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00205ba70 exit status 1   true [0xc0017aa848 0xc0017aa8d8 0xc0017aaa18] [0xc0017aa848 0xc0017aa8d8 0xc0017aaa18] [0xc0017aa8d0 0xc0017aa978] [0xba6c50 0xba6c50] 0xc002bf6600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:08:49.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:49.436: INFO: rc: 1
Feb 13 14:08:49.437: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d70030 exit status 1   true [0xc003250120 0xc003250138 0xc003250150] [0xc003250120 0xc003250138 0xc003250150] [0xc003250130 0xc003250148] [0xba6c50 0xba6c50] 0xc002419740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:08:59.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:08:59.626: INFO: rc: 1
Feb 13 14:08:59.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f64630 exit status 1   true [0xc000f3e078 0xc000f3e090 0xc000f3e0a8] [0xc000f3e078 0xc000f3e090 0xc000f3e0a8] [0xc000f3e088 0xc000f3e0a0] [0xba6c50 0xba6c50] 0xc0028611a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:09:09.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:09:09.873: INFO: rc: 1
Feb 13 14:09:09.874: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002cc20c0 exit status 1   true [0xc00135c038 0xc00135c0b0 0xc00135c138] [0xc00135c038 0xc00135c0b0 0xc00135c138] [0xc00135c078 0xc00135c128] [0xba6c50 0xba6c50] 0xc0018fe3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:09:19.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:09:20.060: INFO: rc: 1
Feb 13 14:09:20.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018420f0 exit status 1   true [0xc001964008 0xc001964020 0xc001964038] [0xc001964008 0xc001964020 0xc001964038] [0xc001964018 0xc001964030] [0xba6c50 0xba6c50] 0xc002082420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:09:30.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:09:30.214: INFO: rc: 1
Feb 13 14:09:30.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003224090 exit status 1   true [0xc0007351a0 0xc000735330 0xc000735438] [0xc0007351a0 0xc000735330 0xc000735438] [0xc0007352b0 0xc000735418] [0xba6c50 0xba6c50] 0xc0019671a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:09:40.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:09:40.382: INFO: rc: 1
Feb 13 14:09:40.382: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00212a090 exit status 1   true [0xc000186138 0xc000187d38 0xc000187e88] [0xc000186138 0xc000187d38 0xc000187e88] [0xc000186148 0xc000187e00] [0xba6c50 0xba6c50] 0xc0022de720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:09:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:09:50.586: INFO: rc: 1
Feb 13 14:09:50.586: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c090 exit status 1   true [0xc000546068 0xc000546198 0xc000546528] [0xc000546068 0xc000546198 0xc000546528] [0xc0005460f8 0xc0005463d0] [0xba6c50 0xba6c50] 0xc001d3a300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:00.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:02.925: INFO: rc: 1
Feb 13 14:10:02.925: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c150 exit status 1   true [0xc000546580 0xc000546710 0xc000546c28] [0xc000546580 0xc000546710 0xc000546c28] [0xc000546668 0xc000546b80] [0xba6c50 0xba6c50] 0xc001d3a840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:12.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:13.139: INFO: rc: 1
Feb 13 14:10:13.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003224180 exit status 1   true [0xc000735448 0xc000735490 0xc0007355b8] [0xc000735448 0xc000735490 0xc0007355b8] [0xc000735460 0xc000735570] [0xba6c50 0xba6c50] 0xc001967980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:23.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:23.328: INFO: rc: 1
Feb 13 14:10:23.328: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34120 exit status 1   true [0xc000f3e000 0xc000f3e018 0xc000f3e030] [0xc000f3e000 0xc000f3e018 0xc000f3e030] [0xc000f3e010 0xc000f3e028] [0xba6c50 0xba6c50] 0xc00260e240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:33.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:33.524: INFO: rc: 1
Feb 13 14:10:33.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d341e0 exit status 1   true [0xc000f3e038 0xc000f3e050 0xc000f3e068] [0xc000f3e038 0xc000f3e050 0xc000f3e068] [0xc000f3e048 0xc000f3e060] [0xba6c50 0xba6c50] 0xc00260e540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:43.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:43.727: INFO: rc: 1
Feb 13 14:10:43.728: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00212a150 exit status 1   true [0xc000187f40 0xc000187fe0 0xc001964048] [0xc000187f40 0xc000187fe0 0xc001964048] [0xc000187fc8 0xc001964040] [0xba6c50 0xba6c50] 0xc0022df9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:10:53.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:10:53.927: INFO: rc: 1
Feb 13 14:10:53.927: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d342a0 exit status 1   true [0xc000f3e070 0xc000f3e088 0xc000f3e0a0] [0xc000f3e070 0xc000f3e088 0xc000f3e0a0] [0xc000f3e080 0xc000f3e098] [0xba6c50 0xba6c50] 0xc00260e9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:03.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:04.082: INFO: rc: 1
Feb 13 14:11:04.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00212a240 exit status 1   true [0xc001964050 0xc001964068 0xc001964080] [0xc001964050 0xc001964068 0xc001964080] [0xc001964060 0xc001964078] [0xba6c50 0xba6c50] 0xc0028600c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:14.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:14.190: INFO: rc: 1
Feb 13 14:11:14.190: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c270 exit status 1   true [0xc000546c60 0xc000546e10 0xc000547000] [0xc000546c60 0xc000546e10 0xc000547000] [0xc000546d28 0xc000546fb0] [0xba6c50 0xba6c50] 0xc001d3ac60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:24.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:24.904: INFO: rc: 1
Feb 13 14:11:24.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34090 exit status 1   true [0xc000186140 0xc000187db0 0xc000187f40] [0xc000186140 0xc000187db0 0xc000187f40] [0xc000187d38 0xc000187e88] [0xba6c50 0xba6c50] 0xc0022de480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:34.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:35.145: INFO: rc: 1
Feb 13 14:11:35.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032240c0 exit status 1   true [0xc000f3e000 0xc000f3e018 0xc000f3e030] [0xc000f3e000 0xc000f3e018 0xc000f3e030] [0xc000f3e010 0xc000f3e028] [0xba6c50 0xba6c50] 0xc00260e0c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:45.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:45.318: INFO: rc: 1
Feb 13 14:11:45.319: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34180 exit status 1   true [0xc000187f88 0xc000187ff0 0xc001964010] [0xc000187f88 0xc000187ff0 0xc001964010] [0xc000187fe0 0xc001964008] [0xba6c50 0xba6c50] 0xc0022df260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:11:55.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:11:55.543: INFO: rc: 1
Feb 13 14:11:55.543: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34270 exit status 1   true [0xc001964018 0xc001964030 0xc001964048] [0xc001964018 0xc001964030 0xc001964048] [0xc001964028 0xc001964040] [0xba6c50 0xba6c50] 0xc0028600c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:05.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:05.752: INFO: rc: 1
Feb 13 14:12:05.752: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c120 exit status 1   true [0xc0007351a0 0xc000735330 0xc000735438] [0xc0007351a0 0xc000735330 0xc000735438] [0xc0007352b0 0xc000735418] [0xba6c50 0xba6c50] 0xc0019676e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:15.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:15.916: INFO: rc: 1
Feb 13 14:12:15.916: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c2a0 exit status 1   true [0xc000735448 0xc000735490 0xc0007355b8] [0xc000735448 0xc000735490 0xc0007355b8] [0xc000735460 0xc000735570] [0xba6c50 0xba6c50] 0xc001967aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:25.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:26.073: INFO: rc: 1
Feb 13 14:12:26.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00212a180 exit status 1   true [0xc000546068 0xc000546198 0xc000546528] [0xc000546068 0xc000546198 0xc000546528] [0xc0005460f8 0xc0005463d0] [0xba6c50 0xba6c50] 0xc001d3a300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:36.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:36.211: INFO: rc: 1
Feb 13 14:12:36.212: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00212a3c0 exit status 1   true [0xc000546580 0xc000546710 0xc000546c28] [0xc000546580 0xc000546710 0xc000546c28] [0xc000546668 0xc000546b80] [0xba6c50 0xba6c50] 0xc001d3a840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:46.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:46.359: INFO: rc: 1
Feb 13 14:12:46.360: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c4b0 exit status 1   true [0xc000735608 0xc000735688 0xc000735790] [0xc000735608 0xc000735688 0xc000735790] [0xc000735678 0xc0007356d8] [0xba6c50 0xba6c50] 0xc001967e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:12:56.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:12:56.633: INFO: rc: 1
Feb 13 14:12:56.634: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34390 exit status 1   true [0xc001964050 0xc001964068 0xc001964080] [0xc001964050 0xc001964068 0xc001964080] [0xc001964060 0xc001964078] [0xba6c50 0xba6c50] 0xc002860660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:13:06.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:13:06.797: INFO: rc: 1
Feb 13 14:13:06.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f3c5a0 exit status 1   true [0xc0007357b0 0xc000735840 0xc000735858] [0xc0007357b0 0xc000735840 0xc000735858] [0xc000735800 0xc000735850] [0xba6c50 0xba6c50] 0xc002082480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:13:16.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:13:16.927: INFO: rc: 1
Feb 13 14:13:16.927: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032241e0 exit status 1   true [0xc000f3e038 0xc000f3e050 0xc000f3e068] [0xc000f3e038 0xc000f3e050 0xc000f3e068] [0xc000f3e048 0xc000f3e060] [0xba6c50 0xba6c50] 0xc00260e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 13 14:13:26.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 14:13:27.123: INFO: rc: 1
Feb 13 14:13:27.124: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb 13 14:13:27.124: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
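Ordered StatefulSets scale down from the highest ordinal first, so with three replicas ss-2 is deleted before ss-1 and ss-0; that is the property this step asserts. A rough way to observe the same behavior by hand, assuming the test's namespace and StatefulSet name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	run := func(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s (err: %v)\n", out, err)
	}
	// Scale to 0, then list pods repeatedly: ss-2 terminates first and
	// ss-0 last, mirroring "scaled down in reverse order".
	run("-n", "statefulset-2382", "scale", "statefulset/ss", "--replicas=0")
	run("-n", "statefulset-2382", "get", "pods", "-o", "wide")
}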
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 13 14:13:27.136: INFO: Deleting all statefulset in ns statefulset-2382
Feb 13 14:13:27.139: INFO: Scaling statefulset ss to 0
Feb 13 14:13:27.153: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 14:13:27.160: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:13:27.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2382" for this suite.
Feb 13 14:13:33.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:13:33.349: INFO: namespace statefulset-2382 deletion completed in 6.146991611s

• [SLOW TEST:380.489 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:13:33.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 13 14:13:42.064: INFO: Successfully updated pod "annotationupdate454a1704-1860-450a-b14f-522b9a586e14"
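This test projects the pod's own metadata into a file through a downward API volume; when an annotation is updated, the kubelet rewrites the projected file, and the test re-reads it to confirm the change. A sketch of the kind of pod spec involved, built with the k8s.io/api types (the name, image, and mount path are illustrative, not the suite's exact fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client",
				Image: "busybox",
				// Re-read the projected file so an annotation update is visible.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldRef{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}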
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:13:44.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3278" for this suite.
Feb 13 14:14:06.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:14:06.362: INFO: namespace downward-api-3278 deletion completed in 22.172450019s

• [SLOW TEST:33.013 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:14:06.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hnvs9 in namespace proxy-2520
I0213 14:14:06.569670       8 runners.go:180] Created replication controller with name: proxy-service-hnvs9, namespace: proxy-2520, replica count: 1
I0213 14:14:07.620726       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:08.621124       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:09.621495       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:10.622311       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:11.622797       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:12.623281       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:13.623859       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0213 14:14:14.624182       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:15.624455       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:16.624924       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:17.625307       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:18.625872       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:19.626280       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:20.626956       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:21.627322       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:22.627715       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:23.628355       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0213 14:14:24.628732       8 runners.go:180] proxy-service-hnvs9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
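The runners.go lines above chart the replica's progression from pending through runningButNotReady to running; setup blocks until the replication controller's single pod reports Ready. A rough equivalent of that wait, polling through kubectl (the label selector is an assumption for illustration, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		out, err := exec.Command("kubectl", "-n", "proxy-2520", "get", "pods",
			"-l", "name=proxy-service-hnvs9", // assumed label
			"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err == nil && string(out) == "True" {
			fmt.Println("replica is Running and Ready")
			return
		}
		time.Sleep(time.Second) // runners.go polls on a similar cadence
	}
}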
Feb 13 14:14:24.635: INFO: setup took 18.187080404s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
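Each attempt below is a GET through the apiserver's proxy subresource. The path encodes the target: pods/<name>:<port>/proxy/ reaches a container port directly, services/<name>:<portname>/proxy/ goes through the service, and an http: or https: prefix selects the scheme. A small sketch enumerating the path shapes seen in this run, using the pod and service names from the log:

package main

import "fmt"

func main() {
	base := "/api/v1/namespaces/proxy-2520"
	pod, svc := "proxy-service-hnvs9-kmrfz", "proxy-service-hnvs9"
	// One representative of each shape; the suite cycles 16 such cases,
	// 20 attempts each, and records status code and latency per GET.
	paths := []string{
		fmt.Sprintf("%s/pods/%s:160/proxy/", base, pod),
		fmt.Sprintf("%s/pods/http:%s:1080/proxy/", base, pod),
		fmt.Sprintf("%s/pods/https:%s:443/proxy/", base, pod),
		fmt.Sprintf("%s/services/%s:portname1/proxy/", base, svc),
		fmt.Sprintf("%s/services/https:%s:tlsportname1/proxy/", base, svc),
	}
	for _, p := range paths {
		fmt.Println(p)
	}
}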
Feb 13 14:14:24.657: INFO: (0) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 21.204204ms)
Feb 13 14:14:24.657: INFO: (0) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 21.488602ms)
Feb 13 14:14:24.657: INFO: (0) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 21.793869ms)
Feb 13 14:14:24.658: INFO: (0) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 22.251631ms)
Feb 13 14:14:24.659: INFO: (0) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 23.068531ms)
Feb 13 14:14:24.659: INFO: (0) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 22.998696ms)
Feb 13 14:14:24.659: INFO: (0) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 22.938442ms)
Feb 13 14:14:24.659: INFO: (0) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 23.166674ms)
Feb 13 14:14:24.660: INFO: (0) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 24.777121ms)
Feb 13 14:14:24.661: INFO: (0) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 25.297937ms)
Feb 13 14:14:24.662: INFO: (0) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 25.997445ms)
Feb 13 14:14:24.677: INFO: (0) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 41.087809ms)
Feb 13 14:14:24.677: INFO: (0) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 41.596359ms)
Feb 13 14:14:24.678: INFO: (0) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 41.926214ms)
Feb 13 14:14:24.678: INFO: (0) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 14.35235ms)
Feb 13 14:14:24.693: INFO: (1) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test (200; 15.018306ms)
Feb 13 14:14:24.694: INFO: (1) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 15.118243ms)
Feb 13 14:14:24.695: INFO: (1) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 16.129113ms)
Feb 13 14:14:24.697: INFO: (1) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 17.810658ms)
Feb 13 14:14:24.697: INFO: (1) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 18.043361ms)
Feb 13 14:14:24.697: INFO: (1) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 18.139611ms)
Feb 13 14:14:24.697: INFO: (1) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 18.122384ms)
Feb 13 14:14:24.699: INFO: (1) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 20.533117ms)
Feb 13 14:14:24.715: INFO: (2) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 15.620463ms)
Feb 13 14:14:24.718: INFO: (2) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 18.82373ms)
Feb 13 14:14:24.720: INFO: (2) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 20.231959ms)
Feb 13 14:14:24.720: INFO: (2) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 20.274635ms)
Feb 13 14:14:24.720: INFO: (2) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 20.565737ms)
Feb 13 14:14:24.720: INFO: (2) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 20.725812ms)
Feb 13 14:14:24.720: INFO: (2) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 21.065822ms)
Feb 13 14:14:24.721: INFO: (2) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 21.348582ms)
Feb 13 14:14:24.721: INFO: (2) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 21.417905ms)
Feb 13 14:14:24.723: INFO: (2) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 23.74562ms)
Feb 13 14:14:24.724: INFO: (2) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 24.397499ms)
Feb 13 14:14:24.725: INFO: (2) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 25.18188ms)
Feb 13 14:14:24.725: INFO: (2) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 25.319959ms)
Feb 13 14:14:24.726: INFO: (2) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 26.660337ms)
Feb 13 14:14:24.744: INFO: (3) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 17.610378ms)
Feb 13 14:14:24.744: INFO: (3) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 18.028747ms)
Feb 13 14:14:24.744: INFO: (3) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 18.273235ms)
Feb 13 14:14:24.745: INFO: (3) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 18.355532ms)
Feb 13 14:14:24.745: INFO: (3) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 19.121767ms)
Feb 13 14:14:24.745: INFO: (3) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 19.269314ms)
Feb 13 14:14:24.746: INFO: (3) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 19.79729ms)
Feb 13 14:14:24.746: INFO: (3) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 19.961321ms)
Feb 13 14:14:24.746: INFO: (3) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 19.861557ms)
Feb 13 14:14:24.747: INFO: (3) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 21.315245ms)
Feb 13 14:14:24.747: INFO: (3) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 21.581467ms)
Feb 13 14:14:24.748: INFO: (3) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 21.61875ms)
Feb 13 14:14:24.750: INFO: (3) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 23.665454ms)
Feb 13 14:14:24.752: INFO: (3) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 25.597334ms)
Feb 13 14:14:24.752: INFO: (3) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 25.982438ms)
Feb 13 14:14:24.768: INFO: (4) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 15.179099ms)
Feb 13 14:14:24.768: INFO: (4) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 15.368473ms)
Feb 13 14:14:24.777: INFO: (4) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 24.023324ms)
Feb 13 14:14:24.778: INFO: (4) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 25.642253ms)
Feb 13 14:14:24.779: INFO: (4) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 26.087799ms)
Feb 13 14:14:24.779: INFO: (4) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 26.727621ms)
Feb 13 14:14:24.779: INFO: (4) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 26.681929ms)
Feb 13 14:14:24.779: INFO: (4) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 26.8121ms)
Feb 13 14:14:24.779: INFO: (4) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 26.884902ms)
Feb 13 14:14:24.780: INFO: (4) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 27.537185ms)
Feb 13 14:14:24.780: INFO: (4) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 27.557693ms)
Feb 13 14:14:24.780: INFO: (4) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 27.776365ms)
Feb 13 14:14:24.781: INFO: (4) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 28.237418ms)
Feb 13 14:14:24.783: INFO: (4) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 29.952605ms)
Feb 13 14:14:24.783: INFO: (4) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 30.532635ms)
Feb 13 14:14:24.793: INFO: (5) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 9.653962ms)
Feb 13 14:14:24.795: INFO: (5) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 11.371441ms)
Feb 13 14:14:24.795: INFO: (5) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 10.802839ms)
Feb 13 14:14:24.796: INFO: (5) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 12.121476ms)
Feb 13 14:14:24.799: INFO: (5) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 14.475226ms)
Feb 13 14:14:24.806: INFO: (5) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 22.426279ms)
Feb 13 14:14:24.806: INFO: (5) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 22.887877ms)
Feb 13 14:14:24.806: INFO: (5) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 22.76573ms)
Feb 13 14:14:24.806: INFO: (5) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 21.700993ms)
Feb 13 14:14:24.806: INFO: (5) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 22.823669ms)
Feb 13 14:14:24.807: INFO: (5) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 22.240288ms)
Feb 13 14:14:24.807: INFO: (5) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 22.97348ms)
Feb 13 14:14:24.808: INFO: (5) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 23.103609ms)
Feb 13 14:14:24.825: INFO: (6) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 16.771602ms)
Feb 13 14:14:24.825: INFO: (6) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 17.086671ms)
Feb 13 14:14:24.826: INFO: (6) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 17.742791ms)
Feb 13 14:14:24.826: INFO: (6) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 17.50069ms)
Feb 13 14:14:24.826: INFO: (6) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 17.517665ms)
Feb 13 14:14:24.826: INFO: (6) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 20.637085ms)
Feb 13 14:14:24.873: INFO: (7) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 20.807397ms)
Feb 13 14:14:24.873: INFO: (7) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 20.626094ms)
Feb 13 14:14:24.873: INFO: (7) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 20.754351ms)
Feb 13 14:14:24.875: INFO: (7) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 22.021432ms)
Feb 13 14:14:24.875: INFO: (7) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 22.300048ms)
Feb 13 14:14:24.877: INFO: (7) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 24.721747ms)
Feb 13 14:14:24.877: INFO: (7) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 24.71877ms)
Feb 13 14:14:24.877: INFO: (7) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 24.695276ms)
Feb 13 14:14:24.877: INFO: (7) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 24.989182ms)
Feb 13 14:14:24.877: INFO: (7) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 24.807707ms)
Feb 13 14:14:24.878: INFO: (7) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 24.848358ms)
Feb 13 14:14:24.878: INFO: (7) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 24.888606ms)
Feb 13 14:14:24.878: INFO: (7) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 25.645435ms)
Feb 13 14:14:24.887: INFO: (8) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 8.399871ms)
Feb 13 14:14:24.887: INFO: (8) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 8.416517ms)
Feb 13 14:14:24.887: INFO: (8) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 8.785894ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 21.398938ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 21.256786ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 21.2988ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 21.25695ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 21.374177ms)
Feb 13 14:14:24.900: INFO: (8) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 9.944115ms)
Feb 13 14:14:24.911: INFO: (9) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 9.995427ms)
Feb 13 14:14:24.911: INFO: (9) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 10.178131ms)
Feb 13 14:14:24.911: INFO: (9) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 10.405601ms)
Feb 13 14:14:24.916: INFO: (9) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 14.642291ms)
Feb 13 14:14:24.917: INFO: (9) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test (200; 13.388545ms)
Feb 13 14:14:24.936: INFO: (10) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 13.77224ms)
Feb 13 14:14:24.937: INFO: (10) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 14.157307ms)
Feb 13 14:14:24.937: INFO: (10) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 14.387279ms)
Feb 13 14:14:24.937: INFO: (10) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 14.690813ms)
Feb 13 14:14:24.937: INFO: (10) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 15.19482ms)
Feb 13 14:14:24.938: INFO: (10) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 14.913755ms)
Feb 13 14:14:24.938: INFO: (10) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 15.677849ms)
Feb 13 14:14:24.939: INFO: (10) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 16.138669ms)
Feb 13 14:14:24.939: INFO: (10) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 16.298044ms)
Feb 13 14:14:24.943: INFO: (10) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 21.126692ms)
Feb 13 14:14:24.944: INFO: (10) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 20.621234ms)
Feb 13 14:14:24.957: INFO: (11) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 12.548471ms)
Feb 13 14:14:24.957: INFO: (11) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 12.707754ms)
Feb 13 14:14:24.958: INFO: (11) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 14.43323ms)
Feb 13 14:14:24.961: INFO: (11) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 17.01816ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 17.950854ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 17.702868ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 17.814108ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 17.939604ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 17.945826ms)
Feb 13 14:14:24.962: INFO: (11) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test (200; 18.275993ms)
Feb 13 14:14:24.964: INFO: (11) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 19.528687ms)
Feb 13 14:14:24.964: INFO: (11) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 19.716552ms)
Feb 13 14:14:24.964: INFO: (11) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 19.905243ms)
Feb 13 14:14:24.966: INFO: (11) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 21.656111ms)
Feb 13 14:14:24.975: INFO: (12) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 8.642612ms)
Feb 13 14:14:24.976: INFO: (12) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 10.262506ms)
Feb 13 14:14:24.976: INFO: (12) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 10.341593ms)
Feb 13 14:14:24.976: INFO: (12) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 10.218921ms)
Feb 13 14:14:24.976: INFO: (12) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 10.287539ms)
Feb 13 14:14:24.979: INFO: (12) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 12.783635ms)
Feb 13 14:14:24.979: INFO: (12) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 12.7221ms)
Feb 13 14:14:24.979: INFO: (12) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 13.135371ms)
Feb 13 14:14:24.980: INFO: (12) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 13.436375ms)
Feb 13 14:14:24.981: INFO: (12) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 14.507656ms)
Feb 13 14:14:24.981: INFO: (12) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 14.591787ms)
Feb 13 14:14:24.981: INFO: (12) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 14.738131ms)
Feb 13 14:14:24.981: INFO: (12) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 14.827403ms)
Feb 13 14:14:24.981: INFO: (12) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 7.525611ms)
Feb 13 14:14:24.989: INFO: (13) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 8.184703ms)
Feb 13 14:14:24.990: INFO: (13) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 8.671702ms)
Feb 13 14:14:24.991: INFO: (13) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 9.290154ms)
Feb 13 14:14:24.991: INFO: (13) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 8.660218ms)
Feb 13 14:14:24.991: INFO: (13) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 8.75793ms)
Feb 13 14:14:24.991: INFO: (13) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 9.208392ms)
Feb 13 14:14:24.994: INFO: (13) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 11.660088ms)
Feb 13 14:14:24.994: INFO: (13) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 11.557421ms)
Feb 13 14:14:24.994: INFO: (13) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 11.945316ms)
Feb 13 14:14:24.994: INFO: (13) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 11.711341ms)
Feb 13 14:14:24.995: INFO: (13) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 13.156526ms)
Feb 13 14:14:24.996: INFO: (13) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 14.147969ms)
Feb 13 14:14:25.025: INFO: (14) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 28.471968ms)
Feb 13 14:14:25.025: INFO: (14) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 28.601876ms)
Feb 13 14:14:25.025: INFO: (14) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 28.773396ms)
Feb 13 14:14:25.025: INFO: (14) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 29.196252ms)
Feb 13 14:14:25.025: INFO: (14) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 29.181834ms)
Feb 13 14:14:25.026: INFO: (14) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 30.051838ms)
Feb 13 14:14:25.027: INFO: (14) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 30.592537ms)
Feb 13 14:14:25.027: INFO: (14) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 31.590877ms)
Feb 13 14:14:25.029: INFO: (14) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 33.501866ms)
Feb 13 14:14:25.029: INFO: (14) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 33.415469ms)
Feb 13 14:14:25.030: INFO: (14) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 33.591906ms)
Feb 13 14:14:25.030: INFO: (14) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 33.69135ms)
Feb 13 14:14:25.030: INFO: (14) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 33.564801ms)
Feb 13 14:14:25.030: INFO: (14) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 15.436346ms)
Feb 13 14:14:25.047: INFO: (15) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 16.307914ms)
Feb 13 14:14:25.047: INFO: (15) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 16.371509ms)
Feb 13 14:14:25.047: INFO: (15) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 16.613911ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 16.590718ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 16.751159ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 16.77205ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 16.733011ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 16.743025ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 17.087845ms)
Feb 13 14:14:25.048: INFO: (15) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 5.260235ms)
Feb 13 14:14:25.053: INFO: (16) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 5.305496ms)
Feb 13 14:14:25.055: INFO: (16) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 6.521458ms)
Feb 13 14:14:25.055: INFO: (16) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 6.869324ms)
Feb 13 14:14:25.055: INFO: (16) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 6.845763ms)
Feb 13 14:14:25.055: INFO: (16) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; 12.797161ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 13.145814ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 13.12771ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 13.260989ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 13.259444ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 13.351606ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 13.247251ms)
Feb 13 14:14:25.075: INFO: (17) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 13.279609ms)
Feb 13 14:14:25.083: INFO: (17) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 20.825754ms)
Feb 13 14:14:25.084: INFO: (17) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 21.829354ms)
Feb 13 14:14:25.084: INFO: (17) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 22.428936ms)
Feb 13 14:14:25.085: INFO: (17) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 22.777174ms)
Feb 13 14:14:25.085: INFO: (17) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 22.761109ms)
Feb 13 14:14:25.085: INFO: (17) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 23.42046ms)
Feb 13 14:14:25.086: INFO: (17) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 23.501063ms)
Feb 13 14:14:25.098: INFO: (18) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 11.636453ms)
Feb 13 14:14:25.098: INFO: (18) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 11.665869ms)
Feb 13 14:14:25.098: INFO: (18) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 12.020275ms)
Feb 13 14:14:25.099: INFO: (18) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 12.609141ms)
Feb 13 14:14:25.099: INFO: (18) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 12.755053ms)
Feb 13 14:14:25.099: INFO: (18) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 12.954597ms)
Feb 13 14:14:25.099: INFO: (18) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: ... (200; 17.323513ms)
Feb 13 14:14:25.105: INFO: (18) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 18.414174ms)
Feb 13 14:14:25.106: INFO: (18) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 19.170941ms)
Feb 13 14:14:25.111: INFO: (19) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 5.122752ms)
Feb 13 14:14:25.111: INFO: (19) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:162/proxy/: bar (200; 5.317867ms)
Feb 13 14:14:25.113: INFO: (19) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:1080/proxy/: test<... (200; 7.606978ms)
Feb 13 14:14:25.114: INFO: (19) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 8.656478ms)
Feb 13 14:14:25.115: INFO: (19) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:160/proxy/: foo (200; 9.676979ms)
Feb 13 14:14:25.116: INFO: (19) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:462/proxy/: tls qux (200; 9.921482ms)
Feb 13 14:14:25.116: INFO: (19) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:460/proxy/: tls baz (200; 9.999348ms)
Feb 13 14:14:25.116: INFO: (19) /api/v1/namespaces/proxy-2520/pods/proxy-service-hnvs9-kmrfz/proxy/: test (200; 9.975035ms)
Feb 13 14:14:25.119: INFO: (19) /api/v1/namespaces/proxy-2520/pods/http:proxy-service-hnvs9-kmrfz:1080/proxy/: ... (200; 13.470485ms)
Feb 13 14:14:25.119: INFO: (19) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname1/proxy/: foo (200; 13.513159ms)
Feb 13 14:14:25.120: INFO: (19) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname2/proxy/: tls qux (200; 14.12641ms)
Feb 13 14:14:25.120: INFO: (19) /api/v1/namespaces/proxy-2520/services/proxy-service-hnvs9:portname2/proxy/: bar (200; 14.706984ms)
Feb 13 14:14:25.121: INFO: (19) /api/v1/namespaces/proxy-2520/services/https:proxy-service-hnvs9:tlsportname1/proxy/: tls baz (200; 14.789893ms)
Feb 13 14:14:25.121: INFO: (19) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname1/proxy/: foo (200; 15.032426ms)
Feb 13 14:14:25.121: INFO: (19) /api/v1/namespaces/proxy-2520/services/http:proxy-service-hnvs9:portname2/proxy/: bar (200; 15.772777ms)
Feb 13 14:14:25.122: INFO: (19) /api/v1/namespaces/proxy-2520/pods/https:proxy-service-hnvs9-kmrfz:443/proxy/: test<... (200; ...)
[... remainder of this proxy test's output and the start of the next test were lost in extraction ...]
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:14:42.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 13 14:14:42.911: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix359989365/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:14:43.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8312" for this suite.
Feb 13 14:14:49.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:14:49.211: INFO: namespace kubectl-8312 deletion completed in 6.163512347s

• [SLOW TEST:6.480 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
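For reference, the --unix-socket behavior exercised above can be reproduced by hand against any reachable cluster; a minimal sketch, with an illustrative socket path:

# Serve the Kubernetes API over a Unix socket instead of a TCP port
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# Query /api/ through the socket; curl can talk to Unix sockets directly
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/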
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:14:49.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 14:14:49.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9448'
Feb 13 14:14:49.424: INFO: stderr: ""
Feb 13 14:14:49.424: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 13 14:14:59.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9448 -o json'
Feb 13 14:14:59.684: INFO: stderr: ""
Feb 13 14:14:59.685: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-13T14:14:49Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9448\",\n        \"resourceVersion\": \"24205594\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9448/pods/e2e-test-nginx-pod\",\n        \"uid\": \"b360dc0d-af15-45ca-8252-5566895ab46f\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-kc2cq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-kc2cq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-kc2cq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T14:14:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T14:14:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T14:14:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T14:14:49Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c8b554acdc08fc83d523fca701d827f328eaab40eeeb73428eafec8e6b85e6da\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-13T14:14:57Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-13T14:14:49Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 13 14:14:59.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9448'
Feb 13 14:15:00.164: INFO: stderr: ""
Feb 13 14:15:00.165: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 13 14:15:00.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9448'
Feb 13 14:15:08.288: INFO: stderr: ""
Feb 13 14:15:08.288: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:15:08.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9448" for this suite.
Feb 13 14:15:14.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:15:14.532: INFO: namespace kubectl-9448 deletion completed in 6.209390082s

• [SLOW TEST:25.321 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
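The replace flow above swaps docker.io/library/nginx:1.14-alpine for docker.io/library/busybox:1.29 by piping a full object back into kubectl replace; a hand-run sketch, assuming the pod from the log still exists:

# kubectl replace requires a complete object, so fetch the live pod,
# rewrite the image field, and feed the result back in on stdin
kubectl get pod e2e-test-nginx-pod -n kubectl-9448 -o json \
  | sed 's|nginx:1.14-alpine|busybox:1.29|' \
  | kubectl replace -f -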
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:15:14.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:15:19.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9649" for this suite.
Feb 13 14:15:26.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:15:26.253: INFO: namespace watch-9649 deletion completed in 6.255949018s

• [SLOW TEST:11.720 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
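The guarantee checked here is that watchers started from the same resourceVersion observe identical event streams; a sketch of watching the raw API through a local proxy (port and namespace illustrative):

kubectl proxy --port=8001 &
# Run this in two terminals: both watches must deliver the same
# events in the same order
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0'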
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:15:26.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6eb50753-c217-458e-b4eb-75703dc05d19
STEP: Creating a pod to test consume secrets
Feb 13 14:15:26.466: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10" in namespace "projected-457" to be "success or failure"
Feb 13 14:15:26.472: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10": Phase="Pending", Reason="", readiness=false. Elapsed: 5.772629ms
Feb 13 14:15:28.490: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022967732s
Feb 13 14:15:30.531: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064104658s
Feb 13 14:15:32.549: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082711239s
Feb 13 14:15:34.570: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103051384s
STEP: Saw pod success
Feb 13 14:15:34.570: INFO: Pod "pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10" satisfied condition "success or failure"
Feb 13 14:15:34.579: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10 container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 14:15:34.719: INFO: Waiting for pod pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10 to disappear
Feb 13 14:15:34.722: INFO: Pod pod-projected-secrets-3dd3cf49-fea5-43ec-82a9-7556f3e03a10 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:15:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-457" for this suite.
Feb 13 14:15:40.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:15:40.905: INFO: namespace projected-457 deletion completed in 6.177720418s

• [SLOW TEST:14.651 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
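A minimal pod spec for the "volume with mappings" case, assuming a secret named mysecret with a key username already exists (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/mapped-name"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: username        # secret key ...
            path: mapped-name    # ... exposed under this file name
EOF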
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:15:40.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3775.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3775.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3775.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3775.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 13 14:15:53.086: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.093: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.098: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3775.svc.cluster.local from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.104: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.110: INFO: Unable to read jessie_udp@PodARecord from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.114: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4: the server could not find the requested resource (get pods dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4)
Feb 13 14:15:53.114: INFO: Lookups using dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3775.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 13 14:15:58.166: INFO: DNS probes using dns-3775/dns-test-5275c5d0-7790-46a9-961d-e3d6557c6bd4 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:15:58.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3775" for this suite.
Feb 13 14:16:04.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:16:04.433: INFO: namespace dns-3775 deletion completed in 6.192780491s

• [SLOW TEST:23.528 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
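Stripped of the retry loop and result files, the wheezy/jessie probes above reduce to two checks run inside the test pod; a sketch with an illustrative pod name and pod IP:

# Entries injected via /etc/hosts resolve through getent
kubectl exec dns-test-pod -- getent hosts dns-querier-1
# The pod's A record (<dashed-ip>.<namespace>.pod.cluster.local) resolves via DNS
kubectl exec dns-test-pod -- dig +notcp +noall +answer +search 10-44-0-1.dns-3775.pod.cluster.local A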
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:16:04.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:16:04.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70" in namespace "downward-api-4636" to be "success or failure"
Feb 13 14:16:04.561: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 16.026111ms
Feb 13 14:16:06.573: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028056215s
Feb 13 14:16:08.593: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047356675s
Feb 13 14:16:10.603: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057918458s
Feb 13 14:16:12.611: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065493803s
Feb 13 14:16:14.629: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083905204s
Feb 13 14:16:16.642: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.09621617s
STEP: Saw pod success
Feb 13 14:16:16.642: INFO: Pod "downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70" satisfied condition "success or failure"
Feb 13 14:16:16.769: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70 container client-container: 
STEP: delete the pod
Feb 13 14:16:16.824: INFO: Waiting for pod downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70 to disappear
Feb 13 14:16:16.842: INFO: Pod downwardapi-volume-40ee34c0-2e72-4d5c-9282-41ea322f9a70 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:16:16.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4636" for this suite.
Feb 13 14:16:22.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:16:23.049: INFO: namespace downward-api-4636 deletion completed in 6.200141734s

• [SLOW TEST:18.615 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
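The downward-API volume under test exposes the container's CPU request as a file; a minimal sketch (names and the 250m request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m      # file contains "250"
EOF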
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:16:23.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-7f37117b-3ae1-444c-8850-779d37999931 in namespace container-probe-4702
Feb 13 14:16:33.198: INFO: Started pod liveness-7f37117b-3ae1-444c-8850-779d37999931 in namespace container-probe-4702
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 14:16:33.204: INFO: Initial restart count of pod liveness-7f37117b-3ae1-444c-8850-779d37999931 is 0
Feb 13 14:16:47.928: INFO: Restart count of pod container-probe-4702/liveness-7f37117b-3ae1-444c-8850-779d37999931 is now 1 (14.724316472s elapsed)
Feb 13 14:17:06.028: INFO: Restart count of pod container-probe-4702/liveness-7f37117b-3ae1-444c-8850-779d37999931 is now 2 (32.824117626s elapsed)
Feb 13 14:17:26.125: INFO: Restart count of pod container-probe-4702/liveness-7f37117b-3ae1-444c-8850-779d37999931 is now 3 (52.92117692s elapsed)
Feb 13 14:17:46.225: INFO: Restart count of pod container-probe-4702/liveness-7f37117b-3ae1-444c-8850-779d37999931 is now 4 (1m13.021658371s elapsed)
Feb 13 14:19:00.690: INFO: Restart count of pod container-probe-4702/liveness-7f37117b-3ae1-444c-8850-779d37999931 is now 5 (2m27.486519406s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:19:00.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4702" for this suite.
Feb 13 14:19:06.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:19:07.121: INFO: namespace container-probe-4702 deletion completed in 6.321509438s

• [SLOW TEST:164.071 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
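A pod whose liveness probe starts failing after a short grace window reproduces the monotonically increasing restartCount seen above; a sketch modeled on the standard liveness-exec example (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount only ever grows; sample it repeatedly to watch it climb
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'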
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:19:07.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:19:39.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5548" for this suite.
Feb 13 14:19:45.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:19:45.673: INFO: namespace namespaces-5548 deletion completed in 6.195302349s
STEP: Destroying namespace "nsdeletetest-4378" for this suite.
Feb 13 14:19:45.678: INFO: Namespace nsdeletetest-4378 was already deleted
STEP: Destroying namespace "nsdeletetest-8117" for this suite.
Feb 13 14:19:51.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:19:51.841: INFO: namespace nsdeletetest-8117 deletion completed in 6.163240142s

• [SLOW TEST:44.720 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
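The invariant checked here, that deleting a namespace tears down its pods and a recreated namespace starts empty, can be reproduced by hand; a sketch with illustrative names (the --generator flag matches the kubectl vintage in this log):

kubectl create namespace nsdelete-demo
kubectl run demo --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine -n nsdelete-demo
# delete waits for the namespace contents to be removed
kubectl delete namespace nsdelete-demo
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo    # empty: "No resources found"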
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:19:51.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 13 14:19:51.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Feb 13 14:19:52.304: INFO: stderr: ""
Feb 13 14:19:52.304: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 13 14:19:54.151: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:54.152: INFO: Found 0 / 1
Feb 13 14:19:54.314: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:54.314: INFO: Found 0 / 1
Feb 13 14:19:55.313: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:55.313: INFO: Found 0 / 1
Feb 13 14:19:56.313: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:56.313: INFO: Found 0 / 1
Feb 13 14:19:57.312: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:57.312: INFO: Found 0 / 1
Feb 13 14:19:58.321: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:58.321: INFO: Found 0 / 1
Feb 13 14:19:59.320: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:19:59.321: INFO: Found 0 / 1
Feb 13 14:20:00.318: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:20:00.318: INFO: Found 1 / 1
Feb 13 14:20:00.318: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 13 14:20:00.324: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:20:00.324: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 13 14:20:00.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nrt9d --namespace=kubectl-6050 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 13 14:20:00.575: INFO: stderr: ""
Feb 13 14:20:00.575: INFO: stdout: "pod/redis-master-nrt9d patched\n"
STEP: checking annotations
Feb 13 14:20:00.583: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:20:00.583: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:20:00.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6050" for this suite.
Feb 13 14:20:24.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:20:24.758: INFO: namespace kubectl-6050 deletion completed in 24.169533001s

• [SLOW TEST:32.916 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
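The patch applied to each RC pod above is a plain strategic-merge patch; a sketch using the names from the log (the namespace has since been destroyed, so treat them as illustrative):

kubectl patch pod redis-master-nrt9d -n kubectl-6050 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# confirm the annotation landed
kubectl get pod redis-master-nrt9d -n kubectl-6050 \
  -o jsonpath='{.metadata.annotations.x}'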
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:20:24.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 13 14:20:24.885: INFO: Waiting up to 5m0s for pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e" in namespace "emptydir-6384" to be "success or failure"
Feb 13 14:20:25.395: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 509.635033ms
Feb 13 14:20:27.407: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521618315s
Feb 13 14:20:29.424: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538547149s
Feb 13 14:20:31.431: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545853611s
Feb 13 14:20:33.442: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556518268s
STEP: Saw pod success
Feb 13 14:20:33.442: INFO: Pod "pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e" satisfied condition "success or failure"
Feb 13 14:20:33.446: INFO: Trying to get logs from node iruya-node pod pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e container test-container: 
STEP: delete the pod
Feb 13 14:20:33.539: INFO: Waiting for pod pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e to disappear
Feb 13 14:20:33.545: INFO: Pod pod-d2e2c785-bc27-4bc6-8e2a-7d0d61f6b32e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:20:33.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6384" for this suite.
Feb 13 14:20:39.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:20:39.776: INFO: namespace emptydir-6384 deletion completed in 6.224521521s

• [SLOW TEST:15.018 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
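The (root,0777,default) case mounts an emptyDir backed by the default medium (node disk) and inspects the directory's mode; a minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}    # no medium set, i.e. the "default" (disk-backed) medium
EOF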
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:20:39.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a837ffa6-9cee-4a85-9c49-d8ee85435c5c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:20:50.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9162" for this suite.
Feb 13 14:21:12.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:21:12.149: INFO: namespace configmap-9162 deletion completed in 22.136235054s

• [SLOW TEST:32.372 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
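ConfigMaps keep UTF-8 text under .data and arbitrary bytes under .binaryData, and this test round-trips both through a volume; a sketch (names and file path illustrative):

# Non-UTF-8 file content is stored base64-encoded under .binaryData
kubectl create configmap bin-demo \
  --from-literal=text=hello \
  --from-file=blob=/path/to/binary-file
kubectl get configmap bin-demo -o yaml    # shows both data: and binaryData: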
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:21:12.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-76f6d946-11a4-4e23-a9bb-75960683eba4
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-76f6d946-11a4-4e23-a9bb-75960683eba4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:22:44.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4038" for this suite.
Feb 13 14:23:06.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:23:06.270: INFO: namespace projected-4038 deletion completed in 22.169133275s

• [SLOW TEST:114.120 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
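The long wait above is the kubelet propagating a configMap update into an already-mounted projected volume on its sync interval; a sketch of the update half (names illustrative, the pod mounting live-demo is assumed):

kubectl create configmap live-demo --from-literal=key=v1
# ... a running pod mounts live-demo through a projected volume ...
kubectl create configmap live-demo --from-literal=key=v2 \
  --dry-run -o yaml | kubectl replace -f -
# the mounted file eventually reads "v2"; no pod restart is needed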
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:23:06.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:23:06.378: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 13 14:23:09.816: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:23:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3318" for this suite.
Feb 13 14:23:22.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:23:22.216: INFO: namespace replication-controller-3318 deletion completed in 12.317016426s

• [SLOW TEST:15.946 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
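The failure condition surfaced here is the ReplicaFailure condition on the RC's status; a sketch, assuming an rc.yaml that defines an RC named condition-test asking for more replicas than the quota allows:

kubectl create quota condition-test --hard=pods=2
kubectl create -f rc.yaml
# the RC reports why it cannot create more pods
kubectl get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
# scaling back within quota clears the condition
kubectl scale rc condition-test --replicas=2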
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:23:22.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 13 14:26:24.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:24.394: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:26.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:26.410: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:28.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:28.408: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:30.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:30.403: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:32.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:32.402: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:34.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:34.405: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:36.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:36.403: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:38.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:38.406: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:40.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:40.405: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:42.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:42.405: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:44.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:44.402: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:46.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:46.404: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:48.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:48.405: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:50.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:50.402: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:52.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:52.401: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:54.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:54.485: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:56.394: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:56.402: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:26:58.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:26:58.405: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:27:00.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:27:00.429: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:27:02.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:27:02.410: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:27:04.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:27:04.413: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:27:06.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:27:06.407: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 14:27:08.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 14:27:08.409: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:27:08.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4975" for this suite.
Feb 13 14:27:30.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:27:30.618: INFO: namespace container-lifecycle-hook-4975 deletion completed in 22.201777094s

• [SLOW TEST:248.402 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
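The pod under test pairs a postStart exec hook with its main container, and deletion then waits out the termination grace period seen in the polling above; a minimal sketch modeled on the standard lifecycle example (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /usr/share/message"]
EOF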
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:27:30.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f
Feb 13 14:27:30.761: INFO: Pod name my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f: Found 0 pods out of 1
Feb 13 14:27:35.774: INFO: Pod name my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f: Found 1 pods out of 1
Feb 13 14:27:35.774: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f" are running
Feb 13 14:27:37.793: INFO: Pod "my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f-cbtzf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:27:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:27:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:27:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:27:30 +0000 UTC Reason: Message:}])
Feb 13 14:27:37.794: INFO: Trying to dial the pod
Feb 13 14:27:42.827: INFO: Controller my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f: Got expected result from replica 1 [my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f-cbtzf]: "my-hostname-basic-e4af0853-e6df-4d4f-b566-a45143410c4f-cbtzf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:27:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7465" for this suite.
Feb 13 14:27:48.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:27:48.970: INFO: namespace replication-controller-7465 deletion completed in 6.139232134s

• [SLOW TEST:18.352 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
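
The spec above boils down to a ReplicationController of one replica running a container that serves its own hostname, then dialing each replica and expecting the pod name back. A hedged sketch of an equivalent RC in Go; the image and port are assumptions in the style of the serve-hostname test image, not values read from this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := &corev1.ReplicationController{
		// The e2e test suffixes the name with a fresh UUID, as seen above.
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "my-hostname-basic",
						// Assumed: a public image that answers HTTP with its
						// own hostname, like the serve-hostname test image.
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	fmt.Println("replication controller:", rc.Name)
}

Dialing each replica and comparing the response to the pod name (the "Got expected result from replica 1" line above) works because each pod's hostname is its own name.
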
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:27:48.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 13 14:27:49.047: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 13 14:27:49.825: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 13 14:27:52.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 14:27:54.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 14:27:56.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 14:27:58.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 14:28:00.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717200869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 14:28:07.867: INFO: Waited 5.479632756s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:28:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1094" for this suite.
Feb 13 14:28:14.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:28:14.514: INFO: namespace aggregator-1094 deletion completed in 6.161606291s

• [SLOW TEST:25.543 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
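
Registering the sample API server means deploying it (the sample-apiserver-deployment polled above, fronted by a Service) plus creating an APIService object that tells the aggregator to proxy a group/version to that Service. A sketch using the apiregistration v1 types; the group, version, service name, and priorities are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	var caBundle []byte // the CA that signed the extension server's serving cert
	apiService := &apiregistrationv1.APIService{
		// The object name must be "<version>.<group>"; this group is assumed.
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.k8s.io",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-1094", // the test namespace above
				Name:      "sample-api",      // illustrative Service name
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	fmt.Println("api service:", apiService.Name)
}

Once the APIService reports Available, kube-apiserver proxies requests for the registered group/version to the Service, which is what the test exercises after the "ready to handle requests" line.
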
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:28:14.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 13 14:28:36.823: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:36.823: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:36.921981       8 log.go:172] (0xc0032ff290) (0xc0021e2c80) Create stream
I0213 14:28:36.922080       8 log.go:172] (0xc0032ff290) (0xc0021e2c80) Stream added, broadcasting: 1
I0213 14:28:36.935724       8 log.go:172] (0xc0032ff290) Reply frame received for 1
I0213 14:28:36.935852       8 log.go:172] (0xc0032ff290) (0xc00235cbe0) Create stream
I0213 14:28:36.935866       8 log.go:172] (0xc0032ff290) (0xc00235cbe0) Stream added, broadcasting: 3
I0213 14:28:36.939939       8 log.go:172] (0xc0032ff290) Reply frame received for 3
I0213 14:28:36.940014       8 log.go:172] (0xc0032ff290) (0xc0021e2d20) Create stream
I0213 14:28:36.940027       8 log.go:172] (0xc0032ff290) (0xc0021e2d20) Stream added, broadcasting: 5
I0213 14:28:36.943578       8 log.go:172] (0xc0032ff290) Reply frame received for 5
I0213 14:28:37.113380       8 log.go:172] (0xc0032ff290) Data frame received for 3
I0213 14:28:37.113490       8 log.go:172] (0xc00235cbe0) (3) Data frame handling
I0213 14:28:37.113537       8 log.go:172] (0xc00235cbe0) (3) Data frame sent
I0213 14:28:37.308095       8 log.go:172] (0xc0032ff290) (0xc00235cbe0) Stream removed, broadcasting: 3
I0213 14:28:37.308291       8 log.go:172] (0xc0032ff290) Data frame received for 1
I0213 14:28:37.308336       8 log.go:172] (0xc0021e2c80) (1) Data frame handling
I0213 14:28:37.308362       8 log.go:172] (0xc0021e2c80) (1) Data frame sent
I0213 14:28:37.308512       8 log.go:172] (0xc0032ff290) (0xc0021e2c80) Stream removed, broadcasting: 1
I0213 14:28:37.308533       8 log.go:172] (0xc0032ff290) (0xc0021e2d20) Stream removed, broadcasting: 5
I0213 14:28:37.308571       8 log.go:172] (0xc0032ff290) Go away received
I0213 14:28:37.308905       8 log.go:172] (0xc0032ff290) (0xc0021e2c80) Stream removed, broadcasting: 1
I0213 14:28:37.308920       8 log.go:172] (0xc0032ff290) (0xc00235cbe0) Stream removed, broadcasting: 3
I0213 14:28:37.308935       8 log.go:172] (0xc0032ff290) (0xc0021e2d20) Stream removed, broadcasting: 5
Feb 13 14:28:37.309: INFO: Exec stderr: ""
Feb 13 14:28:37.309: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:37.309: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:37.381603       8 log.go:172] (0xc003384000) (0xc0021e3040) Create stream
I0213 14:28:37.381680       8 log.go:172] (0xc003384000) (0xc0021e3040) Stream added, broadcasting: 1
I0213 14:28:37.389583       8 log.go:172] (0xc003384000) Reply frame received for 1
I0213 14:28:37.389614       8 log.go:172] (0xc003384000) (0xc0016aea00) Create stream
I0213 14:28:37.389622       8 log.go:172] (0xc003384000) (0xc0016aea00) Stream added, broadcasting: 3
I0213 14:28:37.390860       8 log.go:172] (0xc003384000) Reply frame received for 3
I0213 14:28:37.390879       8 log.go:172] (0xc003384000) (0xc0021e30e0) Create stream
I0213 14:28:37.390887       8 log.go:172] (0xc003384000) (0xc0021e30e0) Stream added, broadcasting: 5
I0213 14:28:37.391940       8 log.go:172] (0xc003384000) Reply frame received for 5
I0213 14:28:37.479146       8 log.go:172] (0xc003384000) Data frame received for 3
I0213 14:28:37.479257       8 log.go:172] (0xc0016aea00) (3) Data frame handling
I0213 14:28:37.479407       8 log.go:172] (0xc0016aea00) (3) Data frame sent
I0213 14:28:37.625457       8 log.go:172] (0xc003384000) Data frame received for 1
I0213 14:28:37.625571       8 log.go:172] (0xc003384000) (0xc0021e30e0) Stream removed, broadcasting: 5
I0213 14:28:37.625673       8 log.go:172] (0xc0021e3040) (1) Data frame handling
I0213 14:28:37.625696       8 log.go:172] (0xc0021e3040) (1) Data frame sent
I0213 14:28:37.625839       8 log.go:172] (0xc003384000) (0xc0016aea00) Stream removed, broadcasting: 3
I0213 14:28:37.626180       8 log.go:172] (0xc003384000) (0xc0021e3040) Stream removed, broadcasting: 1
I0213 14:28:37.626248       8 log.go:172] (0xc003384000) Go away received
I0213 14:28:37.626829       8 log.go:172] (0xc003384000) (0xc0021e3040) Stream removed, broadcasting: 1
I0213 14:28:37.626896       8 log.go:172] (0xc003384000) (0xc0016aea00) Stream removed, broadcasting: 3
I0213 14:28:37.626919       8 log.go:172] (0xc003384000) (0xc0021e30e0) Stream removed, broadcasting: 5
Feb 13 14:28:37.626: INFO: Exec stderr: ""
Feb 13 14:28:37.627: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:37.627: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:37.732932       8 log.go:172] (0xc003384f20) (0xc0021e3540) Create stream
I0213 14:28:37.733126       8 log.go:172] (0xc003384f20) (0xc0021e3540) Stream added, broadcasting: 1
I0213 14:28:37.744937       8 log.go:172] (0xc003384f20) Reply frame received for 1
I0213 14:28:37.745050       8 log.go:172] (0xc003384f20) (0xc0016aeb40) Create stream
I0213 14:28:37.745069       8 log.go:172] (0xc003384f20) (0xc0016aeb40) Stream added, broadcasting: 3
I0213 14:28:37.751809       8 log.go:172] (0xc003384f20) Reply frame received for 3
I0213 14:28:37.751846       8 log.go:172] (0xc003384f20) (0xc00235cc80) Create stream
I0213 14:28:37.751860       8 log.go:172] (0xc003384f20) (0xc00235cc80) Stream added, broadcasting: 5
I0213 14:28:37.755298       8 log.go:172] (0xc003384f20) Reply frame received for 5
I0213 14:28:37.893163       8 log.go:172] (0xc003384f20) Data frame received for 3
I0213 14:28:37.893239       8 log.go:172] (0xc0016aeb40) (3) Data frame handling
I0213 14:28:37.893264       8 log.go:172] (0xc0016aeb40) (3) Data frame sent
I0213 14:28:38.065719       8 log.go:172] (0xc003384f20) (0xc00235cc80) Stream removed, broadcasting: 5
I0213 14:28:38.066069       8 log.go:172] (0xc003384f20) Data frame received for 1
I0213 14:28:38.066151       8 log.go:172] (0xc003384f20) (0xc0016aeb40) Stream removed, broadcasting: 3
I0213 14:28:38.066214       8 log.go:172] (0xc0021e3540) (1) Data frame handling
I0213 14:28:38.066238       8 log.go:172] (0xc0021e3540) (1) Data frame sent
I0213 14:28:38.066249       8 log.go:172] (0xc003384f20) (0xc0021e3540) Stream removed, broadcasting: 1
I0213 14:28:38.066268       8 log.go:172] (0xc003384f20) Go away received
I0213 14:28:38.067161       8 log.go:172] (0xc003384f20) (0xc0021e3540) Stream removed, broadcasting: 1
I0213 14:28:38.067183       8 log.go:172] (0xc003384f20) (0xc0016aeb40) Stream removed, broadcasting: 3
I0213 14:28:38.067188       8 log.go:172] (0xc003384f20) (0xc00235cc80) Stream removed, broadcasting: 5
Feb 13 14:28:38.067: INFO: Exec stderr: ""
Feb 13 14:28:38.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:38.067: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:38.138520       8 log.go:172] (0xc0036ac8f0) (0xc0031774a0) Create stream
I0213 14:28:38.138632       8 log.go:172] (0xc0036ac8f0) (0xc0031774a0) Stream added, broadcasting: 1
I0213 14:28:38.157673       8 log.go:172] (0xc0036ac8f0) Reply frame received for 1
I0213 14:28:38.157900       8 log.go:172] (0xc0036ac8f0) (0xc000454140) Create stream
I0213 14:28:38.157920       8 log.go:172] (0xc0036ac8f0) (0xc000454140) Stream added, broadcasting: 3
I0213 14:28:38.161969       8 log.go:172] (0xc0036ac8f0) Reply frame received for 3
I0213 14:28:38.161994       8 log.go:172] (0xc0036ac8f0) (0xc001e380a0) Create stream
I0213 14:28:38.162003       8 log.go:172] (0xc0036ac8f0) (0xc001e380a0) Stream added, broadcasting: 5
I0213 14:28:38.164194       8 log.go:172] (0xc0036ac8f0) Reply frame received for 5
I0213 14:28:38.273520       8 log.go:172] (0xc0036ac8f0) Data frame received for 3
I0213 14:28:38.273620       8 log.go:172] (0xc000454140) (3) Data frame handling
I0213 14:28:38.273686       8 log.go:172] (0xc000454140) (3) Data frame sent
I0213 14:28:38.413233       8 log.go:172] (0xc0036ac8f0) (0xc000454140) Stream removed, broadcasting: 3
I0213 14:28:38.413438       8 log.go:172] (0xc0036ac8f0) Data frame received for 1
I0213 14:28:38.413481       8 log.go:172] (0xc0031774a0) (1) Data frame handling
I0213 14:28:38.413499       8 log.go:172] (0xc0031774a0) (1) Data frame sent
I0213 14:28:38.413525       8 log.go:172] (0xc0036ac8f0) (0xc001e380a0) Stream removed, broadcasting: 5
I0213 14:28:38.413550       8 log.go:172] (0xc0036ac8f0) (0xc0031774a0) Stream removed, broadcasting: 1
I0213 14:28:38.413573       8 log.go:172] (0xc0036ac8f0) Go away received
I0213 14:28:38.413775       8 log.go:172] (0xc0036ac8f0) (0xc0031774a0) Stream removed, broadcasting: 1
I0213 14:28:38.413795       8 log.go:172] (0xc0036ac8f0) (0xc000454140) Stream removed, broadcasting: 3
I0213 14:28:38.413806       8 log.go:172] (0xc0036ac8f0) (0xc001e380a0) Stream removed, broadcasting: 5
Feb 13 14:28:38.413: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 13 14:28:38.414: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:38.414: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:38.492033       8 log.go:172] (0xc000a998c0) (0xc000ea8460) Create stream
I0213 14:28:38.492126       8 log.go:172] (0xc000a998c0) (0xc000ea8460) Stream added, broadcasting: 1
I0213 14:28:38.540141       8 log.go:172] (0xc000a998c0) Reply frame received for 1
I0213 14:28:38.540262       8 log.go:172] (0xc000a998c0) (0xc000454320) Create stream
I0213 14:28:38.540279       8 log.go:172] (0xc000a998c0) (0xc000454320) Stream added, broadcasting: 3
I0213 14:28:38.547199       8 log.go:172] (0xc000a998c0) Reply frame received for 3
I0213 14:28:38.547249       8 log.go:172] (0xc000a998c0) (0xc000618000) Create stream
I0213 14:28:38.547265       8 log.go:172] (0xc000a998c0) (0xc000618000) Stream added, broadcasting: 5
I0213 14:28:38.550747       8 log.go:172] (0xc000a998c0) Reply frame received for 5
I0213 14:28:38.791463       8 log.go:172] (0xc000a998c0) Data frame received for 3
I0213 14:28:38.791619       8 log.go:172] (0xc000454320) (3) Data frame handling
I0213 14:28:38.791651       8 log.go:172] (0xc000454320) (3) Data frame sent
I0213 14:28:38.934745       8 log.go:172] (0xc000a998c0) (0xc000618000) Stream removed, broadcasting: 5
I0213 14:28:38.934867       8 log.go:172] (0xc000a998c0) Data frame received for 1
I0213 14:28:38.934891       8 log.go:172] (0xc000ea8460) (1) Data frame handling
I0213 14:28:38.934912       8 log.go:172] (0xc000ea8460) (1) Data frame sent
I0213 14:28:38.934962       8 log.go:172] (0xc000a998c0) (0xc000ea8460) Stream removed, broadcasting: 1
I0213 14:28:38.934990       8 log.go:172] (0xc000a998c0) (0xc000454320) Stream removed, broadcasting: 3
I0213 14:28:38.935057       8 log.go:172] (0xc000a998c0) Go away received
I0213 14:28:38.935141       8 log.go:172] (0xc000a998c0) (0xc000ea8460) Stream removed, broadcasting: 1
I0213 14:28:38.935153       8 log.go:172] (0xc000a998c0) (0xc000454320) Stream removed, broadcasting: 3
I0213 14:28:38.935157       8 log.go:172] (0xc000a998c0) (0xc000618000) Stream removed, broadcasting: 5
Feb 13 14:28:38.935: INFO: Exec stderr: ""
Feb 13 14:28:38.935: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:38.935: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:39.009625       8 log.go:172] (0xc0001cbce0) (0xc0013aa460) Create stream
I0213 14:28:39.009762       8 log.go:172] (0xc0001cbce0) (0xc0013aa460) Stream added, broadcasting: 1
I0213 14:28:39.016929       8 log.go:172] (0xc0001cbce0) Reply frame received for 1
I0213 14:28:39.017026       8 log.go:172] (0xc0001cbce0) (0xc000ea85a0) Create stream
I0213 14:28:39.017087       8 log.go:172] (0xc0001cbce0) (0xc000ea85a0) Stream added, broadcasting: 3
I0213 14:28:39.018483       8 log.go:172] (0xc0001cbce0) Reply frame received for 3
I0213 14:28:39.018500       8 log.go:172] (0xc0001cbce0) (0xc0013aa5a0) Create stream
I0213 14:28:39.018505       8 log.go:172] (0xc0001cbce0) (0xc0013aa5a0) Stream added, broadcasting: 5
I0213 14:28:39.019438       8 log.go:172] (0xc0001cbce0) Reply frame received for 5
I0213 14:28:39.128281       8 log.go:172] (0xc0001cbce0) Data frame received for 3
I0213 14:28:39.128348       8 log.go:172] (0xc000ea85a0) (3) Data frame handling
I0213 14:28:39.128369       8 log.go:172] (0xc000ea85a0) (3) Data frame sent
I0213 14:28:39.351321       8 log.go:172] (0xc0001cbce0) (0xc000ea85a0) Stream removed, broadcasting: 3
I0213 14:28:39.351718       8 log.go:172] (0xc0001cbce0) Data frame received for 1
I0213 14:28:39.351774       8 log.go:172] (0xc0013aa460) (1) Data frame handling
I0213 14:28:39.351836       8 log.go:172] (0xc0013aa460) (1) Data frame sent
I0213 14:28:39.351880       8 log.go:172] (0xc0001cbce0) (0xc0013aa5a0) Stream removed, broadcasting: 5
I0213 14:28:39.351914       8 log.go:172] (0xc0001cbce0) (0xc0013aa460) Stream removed, broadcasting: 1
I0213 14:28:39.351935       8 log.go:172] (0xc0001cbce0) Go away received
I0213 14:28:39.352286       8 log.go:172] (0xc0001cbce0) (0xc0013aa460) Stream removed, broadcasting: 1
I0213 14:28:39.352398       8 log.go:172] (0xc0001cbce0) (0xc000ea85a0) Stream removed, broadcasting: 3
I0213 14:28:39.352412       8 log.go:172] (0xc0001cbce0) (0xc0013aa5a0) Stream removed, broadcasting: 5
Feb 13 14:28:39.352: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 13 14:28:39.352: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:39.352: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:39.414766       8 log.go:172] (0xc000cb4790) (0xc000618960) Create stream
I0213 14:28:39.414825       8 log.go:172] (0xc000cb4790) (0xc000618960) Stream added, broadcasting: 1
I0213 14:28:39.423057       8 log.go:172] (0xc000cb4790) Reply frame received for 1
I0213 14:28:39.423093       8 log.go:172] (0xc000cb4790) (0xc0004545a0) Create stream
I0213 14:28:39.423101       8 log.go:172] (0xc000cb4790) (0xc0004545a0) Stream added, broadcasting: 3
I0213 14:28:39.425202       8 log.go:172] (0xc000cb4790) Reply frame received for 3
I0213 14:28:39.425231       8 log.go:172] (0xc000cb4790) (0xc000454640) Create stream
I0213 14:28:39.425243       8 log.go:172] (0xc000cb4790) (0xc000454640) Stream added, broadcasting: 5
I0213 14:28:39.426594       8 log.go:172] (0xc000cb4790) Reply frame received for 5
I0213 14:28:39.513814       8 log.go:172] (0xc000cb4790) Data frame received for 3
I0213 14:28:39.513850       8 log.go:172] (0xc0004545a0) (3) Data frame handling
I0213 14:28:39.513863       8 log.go:172] (0xc0004545a0) (3) Data frame sent
I0213 14:28:39.637758       8 log.go:172] (0xc000cb4790) Data frame received for 1
I0213 14:28:39.637842       8 log.go:172] (0xc000cb4790) (0xc0004545a0) Stream removed, broadcasting: 3
I0213 14:28:39.637887       8 log.go:172] (0xc000618960) (1) Data frame handling
I0213 14:28:39.637902       8 log.go:172] (0xc000618960) (1) Data frame sent
I0213 14:28:39.637931       8 log.go:172] (0xc000cb4790) (0xc000618960) Stream removed, broadcasting: 1
I0213 14:28:39.637949       8 log.go:172] (0xc000cb4790) (0xc000454640) Stream removed, broadcasting: 5
I0213 14:28:39.637968       8 log.go:172] (0xc000cb4790) Go away received
I0213 14:28:39.638132       8 log.go:172] (0xc000cb4790) (0xc000618960) Stream removed, broadcasting: 1
I0213 14:28:39.638148       8 log.go:172] (0xc000cb4790) (0xc0004545a0) Stream removed, broadcasting: 3
I0213 14:28:39.638162       8 log.go:172] (0xc000cb4790) (0xc000454640) Stream removed, broadcasting: 5
Feb 13 14:28:39.638: INFO: Exec stderr: ""
Feb 13 14:28:39.638: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:39.638: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:39.702375       8 log.go:172] (0xc0000ede40) (0xc001e383c0) Create stream
I0213 14:28:39.702434       8 log.go:172] (0xc0000ede40) (0xc001e383c0) Stream added, broadcasting: 1
I0213 14:28:39.707406       8 log.go:172] (0xc0000ede40) Reply frame received for 1
I0213 14:28:39.707450       8 log.go:172] (0xc0000ede40) (0xc000ea8640) Create stream
I0213 14:28:39.707465       8 log.go:172] (0xc0000ede40) (0xc000ea8640) Stream added, broadcasting: 3
I0213 14:28:39.709013       8 log.go:172] (0xc0000ede40) Reply frame received for 3
I0213 14:28:39.709035       8 log.go:172] (0xc0000ede40) (0xc000ea8820) Create stream
I0213 14:28:39.709045       8 log.go:172] (0xc0000ede40) (0xc000ea8820) Stream added, broadcasting: 5
I0213 14:28:39.711481       8 log.go:172] (0xc0000ede40) Reply frame received for 5
I0213 14:28:39.818867       8 log.go:172] (0xc0000ede40) Data frame received for 3
I0213 14:28:39.818940       8 log.go:172] (0xc000ea8640) (3) Data frame handling
I0213 14:28:39.818952       8 log.go:172] (0xc000ea8640) (3) Data frame sent
I0213 14:28:39.958818       8 log.go:172] (0xc0000ede40) (0xc000ea8640) Stream removed, broadcasting: 3
I0213 14:28:39.958891       8 log.go:172] (0xc0000ede40) Data frame received for 1
I0213 14:28:39.958920       8 log.go:172] (0xc001e383c0) (1) Data frame handling
I0213 14:28:39.958933       8 log.go:172] (0xc001e383c0) (1) Data frame sent
I0213 14:28:39.958988       8 log.go:172] (0xc0000ede40) (0xc001e383c0) Stream removed, broadcasting: 1
I0213 14:28:39.959029       8 log.go:172] (0xc0000ede40) (0xc000ea8820) Stream removed, broadcasting: 5
I0213 14:28:39.959051       8 log.go:172] (0xc0000ede40) Go away received
I0213 14:28:39.959135       8 log.go:172] (0xc0000ede40) (0xc001e383c0) Stream removed, broadcasting: 1
I0213 14:28:39.959147       8 log.go:172] (0xc0000ede40) (0xc000ea8640) Stream removed, broadcasting: 3
I0213 14:28:39.959152       8 log.go:172] (0xc0000ede40) (0xc000ea8820) Stream removed, broadcasting: 5
Feb 13 14:28:39.959: INFO: Exec stderr: ""
Feb 13 14:28:39.959: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:39.959: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:40.057918       8 log.go:172] (0xc0010f66e0) (0xc001e38460) Create stream
I0213 14:28:40.058007       8 log.go:172] (0xc0010f66e0) (0xc001e38460) Stream added, broadcasting: 1
I0213 14:28:40.064342       8 log.go:172] (0xc0010f66e0) Reply frame received for 1
I0213 14:28:40.064461       8 log.go:172] (0xc0010f66e0) (0xc001e38500) Create stream
I0213 14:28:40.064475       8 log.go:172] (0xc0010f66e0) (0xc001e38500) Stream added, broadcasting: 3
I0213 14:28:40.065958       8 log.go:172] (0xc0010f66e0) Reply frame received for 3
I0213 14:28:40.066022       8 log.go:172] (0xc0010f66e0) (0xc000ea8960) Create stream
I0213 14:28:40.066038       8 log.go:172] (0xc0010f66e0) (0xc000ea8960) Stream added, broadcasting: 5
I0213 14:28:40.073092       8 log.go:172] (0xc0010f66e0) Reply frame received for 5
I0213 14:28:40.217995       8 log.go:172] (0xc0010f66e0) Data frame received for 3
I0213 14:28:40.218043       8 log.go:172] (0xc001e38500) (3) Data frame handling
I0213 14:28:40.218058       8 log.go:172] (0xc001e38500) (3) Data frame sent
I0213 14:28:40.367823       8 log.go:172] (0xc0010f66e0) Data frame received for 1
I0213 14:28:40.368072       8 log.go:172] (0xc0010f66e0) (0xc001e38500) Stream removed, broadcasting: 3
I0213 14:28:40.368172       8 log.go:172] (0xc001e38460) (1) Data frame handling
I0213 14:28:40.368209       8 log.go:172] (0xc001e38460) (1) Data frame sent
I0213 14:28:40.368235       8 log.go:172] (0xc0010f66e0) (0xc000ea8960) Stream removed, broadcasting: 5
I0213 14:28:40.368408       8 log.go:172] (0xc0010f66e0) (0xc001e38460) Stream removed, broadcasting: 1
I0213 14:28:40.368448       8 log.go:172] (0xc0010f66e0) Go away received
I0213 14:28:40.368669       8 log.go:172] (0xc0010f66e0) (0xc001e38460) Stream removed, broadcasting: 1
I0213 14:28:40.368682       8 log.go:172] (0xc0010f66e0) (0xc001e38500) Stream removed, broadcasting: 3
I0213 14:28:40.368686       8 log.go:172] (0xc0010f66e0) (0xc000ea8960) Stream removed, broadcasting: 5
Feb 13 14:28:40.368: INFO: Exec stderr: ""
Feb 13 14:28:40.368: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3674 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:28:40.368: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:28:40.425401       8 log.go:172] (0xc000cb5ad0) (0xc000619b80) Create stream
I0213 14:28:40.425432       8 log.go:172] (0xc000cb5ad0) (0xc000619b80) Stream added, broadcasting: 1
I0213 14:28:40.433676       8 log.go:172] (0xc000cb5ad0) Reply frame received for 1
I0213 14:28:40.433713       8 log.go:172] (0xc000cb5ad0) (0xc00039e8c0) Create stream
I0213 14:28:40.433724       8 log.go:172] (0xc000cb5ad0) (0xc00039e8c0) Stream added, broadcasting: 3
I0213 14:28:40.434992       8 log.go:172] (0xc000cb5ad0) Reply frame received for 3
I0213 14:28:40.435026       8 log.go:172] (0xc000cb5ad0) (0xc00039e960) Create stream
I0213 14:28:40.435040       8 log.go:172] (0xc000cb5ad0) (0xc00039e960) Stream added, broadcasting: 5
I0213 14:28:40.440356       8 log.go:172] (0xc000cb5ad0) Reply frame received for 5
I0213 14:28:40.608921       8 log.go:172] (0xc000cb5ad0) Data frame received for 3
I0213 14:28:40.608983       8 log.go:172] (0xc00039e8c0) (3) Data frame handling
I0213 14:28:40.609000       8 log.go:172] (0xc00039e8c0) (3) Data frame sent
I0213 14:28:40.747770       8 log.go:172] (0xc000cb5ad0) Data frame received for 1
I0213 14:28:40.748029       8 log.go:172] (0xc000cb5ad0) (0xc00039e960) Stream removed, broadcasting: 5
I0213 14:28:40.748101       8 log.go:172] (0xc000619b80) (1) Data frame handling
I0213 14:28:40.748156       8 log.go:172] (0xc000619b80) (1) Data frame sent
I0213 14:28:40.748221       8 log.go:172] (0xc000cb5ad0) (0xc00039e8c0) Stream removed, broadcasting: 3
I0213 14:28:40.748261       8 log.go:172] (0xc000cb5ad0) (0xc000619b80) Stream removed, broadcasting: 1
I0213 14:28:40.748285       8 log.go:172] (0xc000cb5ad0) Go away received
I0213 14:28:40.748634       8 log.go:172] (0xc000cb5ad0) (0xc000619b80) Stream removed, broadcasting: 1
I0213 14:28:40.748650       8 log.go:172] (0xc000cb5ad0) (0xc00039e8c0) Stream removed, broadcasting: 3
I0213 14:28:40.748656       8 log.go:172] (0xc000cb5ad0) (0xc00039e960) Stream removed, broadcasting: 5
Feb 13 14:28:40.748: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:28:40.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3674" for this suite.
Feb 13 14:29:26.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:29:26.922: INFO: namespace e2e-kubelet-etc-hosts-3674 deletion completed in 46.164949491s

• [SLOW TEST:72.407 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
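
The checks above hinge on when the kubelet manages /etc/hosts: it does for ordinary containers, it must not when a container mounts its own /etc/hosts, and it never does for hostNetwork pods. A sketch of a pod exercising the first two cases; the names, image, and the hostPath mount are assumptions in the spirit of the test, not its exact spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{
					// No /etc/hosts mount: the kubelet injects its managed file.
					Name:    "busybox-1",
					Image:   "busybox", // illustrative
					Command: []string{"sleep", "900"},
				},
				{
					// Explicit /etc/hosts mount: the kubelet must leave it alone.
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-etc-hosts",
						MountPath: "/etc/hosts",
					}},
				},
			},
		},
	}
	fmt.Println("pod:", pod.Name)
	// The third case above is a separate pod with Spec.HostNetwork=true;
	// hostNetwork pods see the node's /etc/hosts, untouched by the kubelet.
}
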
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:29:26.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:29:35.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9573" for this suite.
Feb 13 14:30:17.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:30:17.253: INFO: namespace kubelet-test-9573 deletion completed in 42.122807751s

• [SLOW TEST:50.330 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
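
hostAliases ride on the same kubelet-managed /etc/hosts: entries from pod.spec.hostAliases are appended to the file the kubelet writes into the container. A minimal sketch; the addresses and hostnames are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"}, // illustrative
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{{
				IP:        "127.0.0.1",
				Hostnames: []string{"foo.local", "bar.local"}, // illustrative entries
			}},
			Containers: []corev1.Container{{
				// Reading /etc/hosts back should show the extra entries.
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts && sleep 900"},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}
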
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:30:17.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-590da32a-09ad-4d8e-ade0-2f9828349d5c
STEP: Creating a pod to test consume secrets
Feb 13 14:30:17.358: INFO: Waiting up to 5m0s for pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640" in namespace "secrets-5609" to be "success or failure"
Feb 13 14:30:17.369: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093215ms
Feb 13 14:30:19.376: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017683532s
Feb 13 14:30:21.391: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032222988s
Feb 13 14:30:23.398: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039928235s
Feb 13 14:30:25.405: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Running", Reason="", readiness=true. Elapsed: 8.046419459s
Feb 13 14:30:27.420: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061225172s
STEP: Saw pod success
Feb 13 14:30:27.420: INFO: Pod "pod-secrets-92c967d9-a101-44a9-9f59-356780771640" satisfied condition "success or failure"
Feb 13 14:30:27.428: INFO: Trying to get logs from node iruya-node pod pod-secrets-92c967d9-a101-44a9-9f59-356780771640 container secret-volume-test: 
STEP: delete the pod
Feb 13 14:30:27.501: INFO: Waiting for pod pod-secrets-92c967d9-a101-44a9-9f59-356780771640 to disappear
Feb 13 14:30:27.511: INFO: Pod pod-secrets-92c967d9-a101-44a9-9f59-356780771640 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:30:27.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5609" for this suite.
Feb 13 14:30:33.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:30:33.677: INFO: namespace secrets-5609 deletion completed in 6.159407643s

• [SLOW TEST:16.424 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
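
The pod above combines three knobs: a secret volume with a defaultMode, a non-root runAsUser, and an fsGroup so the mounted files end up group-readable by that GID. A sketch with assumed values (the 0440/1000/1001 constants are illustrative; only the secret name is taken from this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0440) // assumed mode: owner- and group-readable only
	runAsUser := int64(1000)   // assumed non-root UID
	fsGroup := int64(1001)     // assumed GID applied to the mounted files

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &runAsUser,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-590da32a-09ad-4d8e-ade0-2f9828349d5c",
						DefaultMode: &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative; the e2e uses its own test image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}
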
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:30:33.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:30:33.793: INFO: Creating ReplicaSet my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943
Feb 13 14:30:33.829: INFO: Pod name my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943: Found 0 pods out of 1
Feb 13 14:30:38.848: INFO: Pod name my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943: Found 1 pods out of 1
Feb 13 14:30:38.848: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943" is running
Feb 13 14:30:42.868: INFO: Pod "my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943-82xfh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:30:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:30:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:30:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 14:30:33 +0000 UTC Reason: Message:}])
Feb 13 14:30:42.868: INFO: Trying to dial the pod
Feb 13 14:30:47.911: INFO: Controller my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943: Got expected result from replica 1 [my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943-82xfh]: "my-hostname-basic-5aa6853e-81de-4b4e-88ba-b4ef6011c943-82xfh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:30:47.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2681" for this suite.
Feb 13 14:30:54.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:30:54.198: INFO: namespace replicaset-2681 deletion completed in 6.281697027s

• [SLOW TEST:20.521 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
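
The ReplicaSet variant is the same shape as the ReplicationController case earlier, with a label selector object instead of a bare map. A sketch using the apps/v1 types; image and port again assumed, not read from this log:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"}, // test appends a UUID
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	fmt.Println("replicaset:", rs.Name)
}
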
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:30:54.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:30:54.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8520" for this suite.
Feb 13 14:31:16.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:31:16.666: INFO: namespace pods-8520 deletion completed in 22.186308726s

• [SLOW TEST:22.467 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
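
The QoS class verified above is derived from the pod's resources: requests equal to limits on every container yields Guaranteed, requests below limits Burstable, and no requests or limits at all BestEffort. A sketch of a pod that should land in Guaranteed; the quantities and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Equal requests and limits on every container => Guaranteed QoS.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"), // illustrative quantities
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "guaranteed-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "nginx",
				Image:     "nginx", // illustrative
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	// After creation the server populates pod.Status.QOSClass; the test
	// asserts it matches the class implied by the spec.
	fmt.Println(pod.Name, "expected QoS:", corev1.PodQOSGuaranteed)
}
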
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:31:16.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 13 14:31:27.001: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:31:27.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2814" for this suite.
Feb 13 14:31:33.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:31:33.976: INFO: namespace container-runtime-2814 deletion completed in 6.195792295s

• [SLOW TEST:17.310 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
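
The "OK" match above comes from the file, not the logs: with TerminationMessagePolicy FallbackToLogsOnError the kubelet reads the termination-message path as usual and only falls back to the tail of the container log when the container fails and the file is empty. A sketch of a container that succeeds after writing the message; the name, image, and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "termination-message-container", // illustrative
		Image: "busybox",                       // illustrative
		// Exit 0 after writing the message file, so the kubelet reads "OK"
		// from the file and never consults the logs.
		Command:                  []string{"sh", "-c", "printf OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.Name, string(c.TerminationMessagePolicy))
}
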
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:31:33.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:31:34.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4" in namespace "downward-api-5133" to be "success or failure"
Feb 13 14:31:34.117: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.072954ms
Feb 13 14:31:36.137: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031812068s
Feb 13 14:31:38.150: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044684949s
Feb 13 14:31:40.188: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083354112s
Feb 13 14:31:42.214: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109368565s
STEP: Saw pod success
Feb 13 14:31:42.215: INFO: Pod "downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4" satisfied condition "success or failure"
Feb 13 14:31:42.231: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4 container client-container: 
STEP: delete the pod
Feb 13 14:31:42.466: INFO: Waiting for pod downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4 to disappear
Feb 13 14:31:42.471: INFO: Pod downwardapi-volume-a853abe4-9fbd-40ef-b676-af4dae9747c4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:31:42.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5133" for this suite.
Feb 13 14:31:48.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:31:48.630: INFO: namespace downward-api-5133 deletion completed in 6.151591156s

• [SLOW TEST:14.653 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
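
The downward API volume in this spec projects a single file whose content is the pod's own name, which the test then reads back from the container. A sketch of the volume and its mount; the pod name, image, and paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod"}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// /etc/podinfo/podname will contain metadata.name.
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}
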
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:31:48.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:31:48.758: INFO: Creating deployment "nginx-deployment"
Feb 13 14:31:48.769: INFO: Waiting for observed generation 1
Feb 13 14:31:52.242: INFO: Waiting for all required pods to come up
Feb 13 14:31:52.577: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 13 14:32:24.356: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 13 14:32:24.366: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 13 14:32:24.376: INFO: Updating deployment nginx-deployment
Feb 13 14:32:24.376: INFO: Waiting for observed generation 2
Feb 13 14:32:27.593: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 13 14:32:27.611: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 13 14:32:27.671: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 13 14:32:29.026: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 13 14:32:29.027: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 13 14:32:29.033: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 13 14:32:29.613: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 13 14:32:29.613: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 13 14:32:29.620: INFO: Updating deployment nginx-deployment
Feb 13 14:32:29.620: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 13 14:32:30.154: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 13 14:32:31.914: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
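
The 20 and 13 just verified are proportional scaling at work: scaling from 10 to 30 mid-rollout, the deployment may run 30 + maxSurge(3) = 33 pods, 13 exist (8 old + 5 new, per the checks above), so 20 are added in proportion to each replicaset's share of the current total. A simplified Go sketch of that arithmetic; the real controller's rounding and leftover handling is more involved, this merely reproduces the observed numbers:

package main

import "fmt"

func main() {
	// Snapshot from the log when the scale-up from 10 to 30 lands:
	oldRS := int32(8)    // first rollout's replicaset .spec.replicas
	newRS := int32(5)    // second rollout's replicaset .spec.replicas
	desired := int32(30) // new deployment .spec.replicas
	maxSurge := int32(3) // from the deployment's RollingUpdate strategy

	allowed := desired + maxSurge // 33 pods may exist during the rollout
	current := oldRS + newRS      // 13 exist now
	toAdd := allowed - current    // 20 replicas to hand out

	// Distribute in proportion to each replicaset's share of the current
	// total; here the old set's share is floored and the new set takes the
	// remainder, which reproduces the observed 20 and 13.
	addToOld := toAdd * oldRS / current // 20*8/13 = 12
	addToNew := toAdd - addToOld        // 8

	fmt.Printf("old: %d -> %d, new: %d -> %d\n",
		oldRS, oldRS+addToOld, newRS, newRS+addToNew)
}
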
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 13 14:32:37.955: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7052,SelfLink:/apis/apps/v1/namespaces/deployment-7052/deployments/nginx-deployment,UID:bd71a40d-dedd-4e74-85bc-ee65768f10e8,ResourceVersion:24208064,Generation:3,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-13 14:32:26 +0000 UTC 2020-02-13 14:31:48 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-13 14:32:29 +0000 UTC 2020-02-13 14:32:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 13 14:32:40.132: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7052,SelfLink:/apis/apps/v1/namespaces/deployment-7052/replicasets/nginx-deployment-55fb7cb77f,UID:4cc3a36c-086a-40ba-874b-baa70d5eaf45,ResourceVersion:24208069,Generation:3,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd71a40d-dedd-4e74-85bc-ee65768f10e8 0xc0029da127 0xc0029da128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 13 14:32:40.132: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 13 14:32:40.133: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7052,SelfLink:/apis/apps/v1/namespaces/deployment-7052/replicasets/nginx-deployment-7b8c6f4498,UID:df0df291-6a53-430c-948e-b3f049b15830,ResourceVersion:24208062,Generation:3,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd71a40d-dedd-4e74-85bc-ee65768f10e8 0xc0029da1f7 0xc0029da1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
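Together these two dumps show how revisions are told apart: the new ReplicaSet (deployment.kubernetes.io/revision: 2, image nginx:404) holds 13 replicas while the old one (revision 1, docker.io/library/nginx:1.14-alpine) still holds 20, so the pair saturates the max-replicas budget of 33. A sketch of listing those ReplicaSets and their revision annotations, assuming a recent client-go where List takes a context; this helper is illustrative, not the test's own code:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig named in $KUBECONFIG (an assumption
	// for this sketch; the e2e framework wires up its client differently).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same selector the Deployment uses (MatchLabels name=nginx).
	rss, err := cs.AppsV1().ReplicaSets("deployment-7052").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=nginx"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("%s revision=%s replicas=%d\n",
			rs.Name,
			rs.Annotations["deployment.kubernetes.io/revision"],
			*rs.Spec.Replicas)
	}
}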
Feb 13 14:32:42.666: INFO: Pod "nginx-deployment-55fb7cb77f-5792l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5792l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-5792l,UID:916259ab-4ec5-4c26-9a04-3efb2760bf1c,ResourceVersion:24208044,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f200f7 0xc001f200f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.666: INFO: Pod "nginx-deployment-55fb7cb77f-6rnsv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6rnsv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-6rnsv,UID:365ce11d-d495-4263-89c3-38db43fc805e,ResourceVersion:24208045,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20207 0xc001f20208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f202a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.666: INFO: Pod "nginx-deployment-55fb7cb77f-7p66z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7p66z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-7p66z,UID:94674a1b-7199-4116-9956-49227e564cc4,ResourceVersion:24208029,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20327 0xc001f20328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f203b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.666: INFO: Pod "nginx-deployment-55fb7cb77f-89cs6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-89cs6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-89cs6,UID:9616d5a0-ca67-4213-b1b4-35a355ac89e9,ResourceVersion:24208070,Generation:0,CreationTimestamp:2020-02-13 14:32:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20437 0xc001f20438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f204a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f204c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.667: INFO: Pod "nginx-deployment-55fb7cb77f-b4sz6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b4sz6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-b4sz6,UID:db477fca-fcbb-445a-a5d1-d8c493a270f5,ResourceVersion:24207985,Generation:0,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20597 0xc001f20598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.667: INFO: Pod "nginx-deployment-55fb7cb77f-c5qkc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c5qkc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-c5qkc,UID:cee29332-4d17-40aa-ac2b-fb3131abdf16,ResourceVersion:24208063,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f206f7 0xc001f206f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.667: INFO: Pod "nginx-deployment-55fb7cb77f-cmxqt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cmxqt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-cmxqt,UID:e6d5079b-2800-4ba7-8683-9ea46f6ca754,ResourceVersion:24207987,Generation:0,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20887 0xc001f20888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.668: INFO: Pod "nginx-deployment-55fb7cb77f-g7685" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g7685,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-g7685,UID:462cd665-2e8e-4327-92c8-f9580dca0ae2,ResourceVersion:24208037,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20a47 0xc001f20a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20b60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.668: INFO: Pod "nginx-deployment-55fb7cb77f-gxq26" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gxq26,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-gxq26,UID:18a8cf47-c90e-4a5f-a811-f2773405a0c2,ResourceVersion:24207994,Generation:0,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20d57 0xc001f20d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f20ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f20ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.668: INFO: Pod "nginx-deployment-55fb7cb77f-hkdzm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hkdzm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-hkdzm,UID:2ca5190d-7df3-4916-900c-052624909452,ResourceVersion:24207974,Generation:0,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f20fd7 0xc001f20fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f21040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f21060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.669: INFO: Pod "nginx-deployment-55fb7cb77f-j6r6m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j6r6m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-j6r6m,UID:d9c58aa0-797b-40e4-95e6-939f5a48a356,ResourceVersion:24208053,Generation:0,CreationTimestamp:2020-02-13 14:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f211b7 0xc001f211b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f21230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f21250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.669: INFO: Pod "nginx-deployment-55fb7cb77f-nvw4h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nvw4h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-nvw4h,UID:fb9d7145-20ed-4417-9e9b-eb0c58ea8326,ResourceVersion:24207977,Generation:0,CreationTimestamp:2020-02-13 14:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f21497 0xc001f21498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f215d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f215f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.669: INFO: Pod "nginx-deployment-55fb7cb77f-rnss7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rnss7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-55fb7cb77f-rnss7,UID:a6db3275-da58-452f-badf-23b7f5965176,ResourceVersion:24208046,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4cc3a36c-086a-40ba-874b-baa70d5eaf45 0xc001f217c7 0xc001f217c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f219a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f219c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
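Every 55fb7cb77f pod above is reported "is not available": each is Pending, and those already bound to a node sit in ContainerStateWaiting with Reason:ContainerCreating because nginx:404 cannot be pulled. A simplified Go sketch of the rule behind those verdicts (the real check lives in the deployment controller; this Deployment sets MinReadySeconds:0 per the spec dump):

package main

import (
	"fmt"
	"time"
)

// isAvailable mirrors, in simplified form, how the "is available" /
// "is not available" verdicts above are decided: the pod's Ready condition
// must be true and must have been true for at least minReadySeconds.
func isAvailable(ready bool, readySince time.Time, minReadySeconds int, now time.Time) bool {
	if !ready {
		return false
	}
	return !readySince.Add(time.Duration(minReadySeconds) * time.Second).After(now)
}

func main() {
	now := time.Now()
	// A Pending nginx:404 pod: never Ready, never available.
	fmt.Println(isAvailable(false, time.Time{}, 0, now)) // false
	// A Ready pod with MinReadySeconds 0: available immediately.
	fmt.Println(isAvailable(true, now, 0, now)) // true
}

With MinReadySeconds at 0 a pod becomes available the moment it turns Ready, which none of the nginx:404 pods ever do.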
Feb 13 14:32:42.669: INFO: Pod "nginx-deployment-7b8c6f4498-6smj9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6smj9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-6smj9,UID:a98dc2b3-09ef-498d-89f4-222db8c9a79a,ResourceVersion:24207915,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc001f21a97 0xc001f21a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f21bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f21c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-13 14:31:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bbfb7f662053d4d6bb9ab1aafbff84ac277a26eebf4d972d389b42140fe20c5a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.670: INFO: Pod "nginx-deployment-7b8c6f4498-6wndw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6wndw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-6wndw,UID:14e68740-fcd8-4654-9001-fddb34da945a,ResourceVersion:24207885,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc001f21db7 0xc001f21db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f21ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f21f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-13 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a63faf79628a4d5fb35cdcdd9017d538f53c951a464e1d3962113df368cf3405}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.670: INFO: Pod "nginx-deployment-7b8c6f4498-8gw6g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8gw6g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-8gw6g,UID:3ba50391-d739-4a60-bdf4-38c8329ad2f7,ResourceVersion:24207928,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2097 0xc002ab2098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-13 14:31:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e55b9f51d43f2b55e8ba690afe224b4ef1acdc557b329961ad7290ab8ee97de3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
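The three pods above are the contrast case: old-revision pods with ContainerStateRunning, a PodIP, and Ready True, so they count toward the 8 ready replicas in the Deployment status. A small sketch, using the k8s.io/api types these dumps are printed from, of classifying a container status the way the log does:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// describe reports a container the way the dumps above do: Running with a
// started-at time, or Waiting on a reason such as ContainerCreating.
func describe(cs corev1.ContainerStatus) string {
	switch {
	case cs.State.Running != nil:
		return fmt.Sprintf("running since %s", cs.State.Running.StartedAt)
	case cs.State.Waiting != nil:
		return fmt.Sprintf("waiting: %s", cs.State.Waiting.Reason)
	default:
		return "unknown"
	}
}

func main() {
	running := corev1.ContainerStatus{Name: "nginx",
		State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{StartedAt: metav1.Now()}}}
	creating := corev1.ContainerStatus{Name: "nginx",
		State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}}}
	fmt.Println(describe(running), "/", describe(creating))
}

The 7b8c6f4498 pods that follow were created only seconds before this log line and are still ContainerCreating, so they fall into the waiting branch even though they use the pullable 1.14-alpine image.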
Feb 13 14:32:42.670: INFO: Pod "nginx-deployment-7b8c6f4498-c2xqc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c2xqc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-c2xqc,UID:005bfbec-8159-4fb4-974c-18c2d89cbd1c,ResourceVersion:24208068,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab21f7 0xc002ab21f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab22a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.670: INFO: Pod "nginx-deployment-7b8c6f4498-cdrtv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cdrtv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-cdrtv,UID:54fafaea-6602-42c8-b611-fa17f57714e9,ResourceVersion:24208042,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2377 0xc002ab2378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab23e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.671: INFO: Pod "nginx-deployment-7b8c6f4498-cdslt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cdslt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-cdslt,UID:6d4954ad-0745-4a18-af53-2d17962f2870,ResourceVersion:24208048,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2487 0xc002ab2488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab24f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.671: INFO: Pod "nginx-deployment-7b8c6f4498-dfbr6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dfbr6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-dfbr6,UID:701450af-2a74-44c7-9bef-850c6af9f98b,ResourceVersion:24208036,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2597 0xc002ab2598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.672: INFO: Pod "nginx-deployment-7b8c6f4498-hzdfq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hzdfq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-hzdfq,UID:37cde51b-4215-4de8-92a3-84708a30f668,ResourceVersion:24208039,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab26c7 0xc002ab26c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.672: INFO: Pod "nginx-deployment-7b8c6f4498-k279m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k279m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-k279m,UID:5a9432ec-7c08-4ff7-bd65-2d8b9303e029,ResourceVersion:24208047,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab27e7 0xc002ab27e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.672: INFO: Pod "nginx-deployment-7b8c6f4498-l4rkn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l4rkn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-l4rkn,UID:570e20db-4af0-4e81-9b76-ec59b61e5a49,ResourceVersion:24208061,Generation:0,CreationTimestamp:2020-02-13 14:32:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2907 0xc002ab2908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.672: INFO: Pod "nginx-deployment-7b8c6f4498-nxx7d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nxx7d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-nxx7d,UID:9a44be6c-e5af-4a4a-be07-337734c7fd86,ResourceVersion:24207893,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2a57 0xc002ab2a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-13 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://24c52bee7b0df5e54cb6faf69308b4194fac6889287779dac9bf863bf2ccc861}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.672: INFO: Pod "nginx-deployment-7b8c6f4498-pcpm8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pcpm8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-pcpm8,UID:ede128ef-8979-4480-b620-71ae2834b10d,ResourceVersion:24208081,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2bc7 0xc002ab2bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.673: INFO: Pod "nginx-deployment-7b8c6f4498-qlxkc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qlxkc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-qlxkc,UID:68494ca4-2274-41d1-8dc2-1a0923b1a4a7,ResourceVersion:24207906,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2d27 0xc002ab2d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-13 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ef14a62b25514edf5920b72828087b99e7857d5381455f07c629773d562af6a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.673: INFO: Pod "nginx-deployment-7b8c6f4498-qm54v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qm54v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-qm54v,UID:2404c748-79f8-4244-9e1d-e20dae43505f,ResourceVersion:24208051,Generation:0,CreationTimestamp:2020-02-13 14:32:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2e87 0xc002ab2e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab2f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab2f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-13 14:32:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.673: INFO: Pod "nginx-deployment-7b8c6f4498-rh87d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rh87d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-rh87d,UID:ea626681-e9c4-445b-8d7e-1714e10daa1c,ResourceVersion:24207909,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab2fe7 0xc002ab2fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab3060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab3080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-13 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fab3c37086307eee25f97b234f6c2c50ae393226a971011494483ba8aaafeb5f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.673: INFO: Pod "nginx-deployment-7b8c6f4498-rl5bs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rl5bs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-rl5bs,UID:b0853c2b-9e79-4aae-8799-17fa255273cd,ResourceVersion:24208040,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab3157 0xc002ab3158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab31d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab31f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.673: INFO: Pod "nginx-deployment-7b8c6f4498-t8vjx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t8vjx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-t8vjx,UID:3b3cf09b-1f52-4be9-b720-183e10c2dfa2,ResourceVersion:24208080,Generation:0,CreationTimestamp:2020-02-13 14:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab3277 0xc002ab3278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab32e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab3300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.674: INFO: Pod "nginx-deployment-7b8c6f4498-v2qz2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2qz2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-v2qz2,UID:0baf23fc-27c2-4271-a3e8-dc59935f226f,ResourceVersion:24208050,Generation:0,CreationTimestamp:2020-02-13 14:32:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab33d7 0xc002ab33d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab3440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab3460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-13 14:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.674: INFO: Pod "nginx-deployment-7b8c6f4498-w8ntj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w8ntj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-w8ntj,UID:8a492090-0619-44d4-a0ad-6a127dc65323,ResourceVersion:24207934,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab3527 0xc002ab3528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab35a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab35c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-13 14:31:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c394f979ebc34882aa1d119e5cd75ffaa447d5e0c68a13237f070abd34e9d3cc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 13 14:32:42.674: INFO: Pod "nginx-deployment-7b8c6f4498-xzc95" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xzc95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7052,SelfLink:/api/v1/namespaces/deployment-7052/pods/nginx-deployment-7b8c6f4498-xzc95,UID:d3639981-f1e3-41dc-a0cc-232c4fcfdec4,ResourceVersion:24207903,Generation:0,CreationTimestamp:2020-02-13 14:31:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 df0df291-6a53-430c-948e-b3f049b15830 0xc002ab3697 0xc002ab3698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5ghkg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ab3710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ab3730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:32:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 14:31:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-13 14:31:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 14:32:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c99dac7d01d882b9114fdad1088283a91239b628de63b10042bd9ca224b90faf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
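Editor's note: the "is available" / "is not available" verdicts in the pod dumps above come down to the pod's phase and Ready condition: a pod counts as available once it is Running and its Ready condition has held True for the deployment's minReadySeconds. A minimal Go sketch of that rule, assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path; the helper name isAvailable is ours, not the e2e framework's:

    package main

    import (
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // isAvailable mirrors the availability rule: the pod must be Running
    // with a Ready condition that has held True for at least minReadySeconds.
    func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
    	if pod.Status.Phase != corev1.PodRunning {
    		return false
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			readyFor := now.Sub(c.LastTransitionTime.Time)
    			return readyFor >= time.Duration(minReadySeconds)*time.Second
    		}
    	}
    	return false
    }

    func main() {
    	// A pod that has been Ready for 30s against a 10s minReadySeconds.
    	pod := &corev1.Pod{Status: corev1.PodStatus{
    		Phase: corev1.PodRunning,
    		Conditions: []corev1.PodCondition{{
    			Type:               corev1.PodReady,
    			Status:             corev1.ConditionTrue,
    			LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
    		}},
    	}}
    	fmt.Println(isAvailable(pod, 10, time.Now())) // true
    }

The Pending pods dumped above fail this check immediately: their only condition is PodScheduled, so the Ready scan falls through to false.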
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:32:42.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7052" for this suite.
Feb 13 14:33:42.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:33:42.947: INFO: namespace deployment-7052 deletion completed in 57.170776733s

• [SLOW TEST:114.316 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
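Editor's note: "proportional scaling" in the test just finished means that when a Deployment is resized mid-rollout, the controller splits the new replica total across the old and new ReplicaSets in proportion to their current sizes rather than funneling everything into the new one. A toy Go sketch of that arithmetic, with the controller's largest-remainder rounding simplified to round-robin; this is an illustration of the idea, not the deployment controller's actual code:

    package main

    import "fmt"

    // scaleProportionally splits newTotal across replica sets in proportion
    // to their current sizes, handing out rounding leftovers one at a time.
    func scaleProportionally(current []int32, newTotal int32) []int32 {
    	out := make([]int32, len(current))
    	var oldTotal int32
    	for _, n := range current {
    		oldTotal += n
    	}
    	if len(current) == 0 || oldTotal == 0 {
    		return out // nothing to apportion proportionally
    	}
    	var assigned int32
    	for i, n := range current {
    		out[i] = n * newTotal / oldTotal // floor of the proportional share
    		assigned += out[i]
    	}
    	for i := 0; assigned < newTotal; i = (i + 1) % len(out) {
    		out[i]++ // distribute the rounding remainder
    		assigned++
    	}
    	return out
    }

    func main() {
    	// e.g. an old RS at 8 replicas and a new RS at 5, scaled to 30:
    	fmt.Println(scaleProportionally([]int32{8, 5}, 30)) // [19 11]
    }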
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:33:42.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 13 14:33:44.217: INFO: Pod name wrapped-volume-race-75fe9954-1e5c-42ac-86cb-80d62759b9ea: Found 0 pods out of 5
Feb 13 14:33:49.232: INFO: Pod name wrapped-volume-race-75fe9954-1e5c-42ac-86cb-80d62759b9ea: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-75fe9954-1e5c-42ac-86cb-80d62759b9ea in namespace emptydir-wrapper-1534, will wait for the garbage collector to delete the pods
Feb 13 14:34:17.511: INFO: Deleting ReplicationController wrapped-volume-race-75fe9954-1e5c-42ac-86cb-80d62759b9ea took: 27.15652ms
Feb 13 14:34:18.012: INFO: Terminating ReplicationController wrapped-volume-race-75fe9954-1e5c-42ac-86cb-80d62759b9ea pods took: 501.007949ms
STEP: Creating RC which spawns configmap-volume pods
Feb 13 14:35:07.672: INFO: Pod name wrapped-volume-race-13e352ec-b402-4f0a-8e27-75b923d9d7c5: Found 0 pods out of 5
Feb 13 14:35:12.687: INFO: Pod name wrapped-volume-race-13e352ec-b402-4f0a-8e27-75b923d9d7c5: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-13e352ec-b402-4f0a-8e27-75b923d9d7c5 in namespace emptydir-wrapper-1534, will wait for the garbage collector to delete the pods
Feb 13 14:35:42.790: INFO: Deleting ReplicationController wrapped-volume-race-13e352ec-b402-4f0a-8e27-75b923d9d7c5 took: 10.818149ms
Feb 13 14:35:43.191: INFO: Terminating ReplicationController wrapped-volume-race-13e352ec-b402-4f0a-8e27-75b923d9d7c5 pods took: 400.623547ms
STEP: Creating RC which spawns configmap-volume pods
Feb 13 14:36:27.858: INFO: Pod name wrapped-volume-race-6d83ebcb-cccf-4afa-bdf6-801df6db594a: Found 0 pods out of 5
Feb 13 14:36:32.876: INFO: Pod name wrapped-volume-race-6d83ebcb-cccf-4afa-bdf6-801df6db594a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6d83ebcb-cccf-4afa-bdf6-801df6db594a in namespace emptydir-wrapper-1534, will wait for the garbage collector to delete the pods
Feb 13 14:37:05.007: INFO: Deleting ReplicationController wrapped-volume-race-6d83ebcb-cccf-4afa-bdf6-801df6db594a took: 17.731928ms
Feb 13 14:37:05.408: INFO: Terminating ReplicationController wrapped-volume-race-6d83ebcb-cccf-4afa-bdf6-801df6db594a pods took: 400.488682ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:37:57.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1534" for this suite.
Feb 13 14:38:07.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:38:07.825: INFO: namespace emptydir-wrapper-1534 deletion completed in 10.183018987s

• [SLOW TEST:264.877 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
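Editor's note: the race this test guards against involved pods mounting many configMap volumes at once, which older kubelets wrapped in a shared emptyDir and could tear down concurrently; hence the three rounds of 5-pod ReplicationControllers against 50 configmaps above. A hedged Go sketch of the pod shape being exercised, assuming k8s.io/api and k8s.io/apimachinery; the volume names and busybox image are illustrative, not the test's exact values:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	const volumeCount = 50 // the test above creates 50 configmaps
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "test", Image: "busybox"}},
    		},
    	}
    	// Mount one configMap volume per configmap, all on the same pod.
    	for i := 0; i < volumeCount; i++ {
    		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name
    		pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
    			Name: name,
    			VolumeSource: corev1.VolumeSource{
    				ConfigMap: &corev1.ConfigMapVolumeSource{
    					LocalObjectReference: corev1.LocalObjectReference{Name: name},
    				},
    			},
    		})
    		pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
    			corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
    	}
    	fmt.Println(len(pod.Spec.Volumes), "configmap volumes on one pod")
    }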
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:38:07.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1a8b0dc3-f9b1-4a95-9c41-2e0ad5d4654c
STEP: Creating configMap with name cm-test-opt-upd-c239aa31-2f98-4011-97db-32f14f3d72be
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1a8b0dc3-f9b1-4a95-9c41-2e0ad5d4654c
STEP: Updating configmap cm-test-opt-upd-c239aa31-2f98-4011-97db-32f14f3d72be
STEP: Creating configMap with name cm-test-opt-create-c4068cd9-1ce5-41b3-97c7-4a2da2a0c5dc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:38:28.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3222" for this suite.
Feb 13 14:38:52.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:38:52.626: INFO: namespace configmap-3222 deletion completed in 24.201897264s

• [SLOW TEST:44.801 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
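Editor's note: the "optional" in the ConfigMap test above is the Optional flag on ConfigMapVolumeSource: with it set, the pod starts, and the volume is updated in place, even if the referenced configMap is deleted (cm-test-opt-del) or created only later (cm-test-opt-create). A minimal Go sketch of such a volume, assuming k8s.io/api; the configMap name is illustrative:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	optional := true
    	vol := corev1.Volume{
    		Name: "cm-volume",
    		VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
    				Optional:             &optional, // pod starts even if the configmap is absent
    			},
    		},
    	}
    	fmt.Printf("volume %q tolerates a missing configmap: %v\n",
    		vol.Name, *vol.ConfigMap.Optional)
    }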
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:38:52.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8cc37857-c0e5-4965-b645-d802dd27d150
STEP: Creating a pod to test consume configMaps
Feb 13 14:38:52.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e" in namespace "projected-9772" to be "success or failure"
Feb 13 14:38:52.773: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.030791ms
Feb 13 14:38:54.807: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04528582s
Feb 13 14:38:56.830: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067635472s
Feb 13 14:38:58.845: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083077439s
Feb 13 14:39:00.856: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093473272s
STEP: Saw pod success
Feb 13 14:39:00.856: INFO: Pod "pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e" satisfied condition "success or failure"
Feb 13 14:39:00.865: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 14:39:00.914: INFO: Waiting for pod pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e to disappear
Feb 13 14:39:00.921: INFO: Pod pod-projected-configmaps-a4c33e62-0ac5-44ae-a1cb-37e4a161e72e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:39:00.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9772" for this suite.
Feb 13 14:39:06.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:39:07.068: INFO: namespace projected-9772 deletion completed in 6.140438688s

• [SLOW TEST:14.442 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
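Editor's note: "with mappings" in the projected configMap test refers to the Items field of a projected configMap source, which remaps a data key to an arbitrary relative path inside the volume instead of the default key-named file. A minimal Go sketch, assuming k8s.io/api; the key and path values are illustrative:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	vol := corev1.Volume{
    		Name: "projected-configmap-volume",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "projected-configmap-test-volume-map",
    						},
    						// Items remaps a data key to a different file name.
    						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
    					},
    				}},
    			},
    		},
    	}
    	item := vol.Projected.Sources[0].ConfigMap.Items[0]
    	fmt.Printf("key %q will appear in the volume at %q\n", item.Key, item.Path)
    }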
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:39:07.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 13 14:39:07.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5776'
Feb 13 14:39:10.039: INFO: stderr: ""
Feb 13 14:39:10.040: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 13 14:39:11.051: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:11.051: INFO: Found 0 / 1
Feb 13 14:39:12.047: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:12.047: INFO: Found 0 / 1
Feb 13 14:39:13.060: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:13.061: INFO: Found 0 / 1
Feb 13 14:39:14.057: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:14.057: INFO: Found 0 / 1
Feb 13 14:39:15.049: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:15.049: INFO: Found 0 / 1
Feb 13 14:39:16.046: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:16.047: INFO: Found 0 / 1
Feb 13 14:39:17.689: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:17.690: INFO: Found 0 / 1
Feb 13 14:39:18.094: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:18.095: INFO: Found 0 / 1
Feb 13 14:39:19.048: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:19.048: INFO: Found 0 / 1
Feb 13 14:39:20.047: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:20.047: INFO: Found 1 / 1
Feb 13 14:39:20.047: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 13 14:39:20.052: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 14:39:20.052: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 13 14:39:20.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776'
Feb 13 14:39:20.281: INFO: stderr: ""
Feb 13 14:39:20.281: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 13 Feb 14:39:18.474 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 14:39:18.474 # Server started, Redis version 3.2.12\n1:M 13 Feb 14:39:18.475 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Feb 14:39:18.475 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 13 14:39:20.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776 --tail=1'
Feb 13 14:39:20.409: INFO: stderr: ""
Feb 13 14:39:20.409: INFO: stdout: "1:M 13 Feb 14:39:18.475 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 13 14:39:20.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776 --limit-bytes=1'
Feb 13 14:39:20.645: INFO: stderr: ""
Feb 13 14:39:20.645: INFO: stdout: " "
STEP: exposing timestamps
Feb 13 14:39:20.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776 --tail=1 --timestamps'
Feb 13 14:39:20.909: INFO: stderr: ""
Feb 13 14:39:20.909: INFO: stdout: "2020-02-13T14:39:18.479625655Z 1:M 13 Feb 14:39:18.475 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 13 14:39:23.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776 --since=1s'
Feb 13 14:39:23.665: INFO: stderr: ""
Feb 13 14:39:23.665: INFO: stdout: ""
Feb 13 14:39:23.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4c8m redis-master --namespace=kubectl-5776 --since=24h'
Feb 13 14:39:23.843: INFO: stderr: ""
Feb 13 14:39:23.843: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 13 Feb 14:39:18.474 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 14:39:18.474 # Server started, Redis version 3.2.12\n1:M 13 Feb 14:39:18.475 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Feb 14:39:18.475 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 13 14:39:23.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5776'
Feb 13 14:39:24.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 14:39:24.070: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 13 14:39:24.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5776'
Feb 13 14:39:24.385: INFO: stderr: "No resources found.\n"
Feb 13 14:39:24.386: INFO: stdout: ""
Feb 13 14:39:24.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5776 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 13 14:39:24.499: INFO: stderr: ""
Feb 13 14:39:24.500: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:39:24.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5776" for this suite.
Feb 13 14:39:46.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:39:46.665: INFO: namespace kubectl-5776 deletion completed in 22.157567071s

• [SLOW TEST:39.597 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
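
The --tail, --limit-bytes, --timestamps and --since flags exercised above map one-to-one onto fields of client-go's corev1.PodLogOptions. A minimal Go sketch of the same queries (pod, container and namespace names taken from the log; assumes a recent client-go, where Stream takes a context):

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	tail, since := int64(1), int64(24*3600) // --tail=1, --since=24h
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,  // --tail
		Timestamps:   true,   // --timestamps
		SinceSeconds: &since, // --since (LimitBytes would cover --limit-bytes)
	}
	rc, err := cs.CoreV1().Pods("kubectl-5776").GetLogs("redis-master-s4c8m", opts).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // prints whatever the kubelet returns, as in the stdout lines above
}
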
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:39:46.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-5a772c7a-5aa5-4ffd-bb3f-6f0d3e972678 in namespace container-probe-7356
Feb 13 14:39:55.035: INFO: Started pod busybox-5a772c7a-5aa5-4ffd-bb3f-6f0d3e972678 in namespace container-probe-7356
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 14:39:55.041: INFO: Initial restart count of pod busybox-5a772c7a-5aa5-4ffd-bb3f-6f0d3e972678 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:43:56.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7356" for this suite.
Feb 13 14:44:02.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:44:03.289: INFO: namespace container-probe-7356 deletion completed in 6.706530507s

• [SLOW TEST:256.624 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
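
The probed pod keeps /tmp/health in place, so "cat /tmp/health" keeps exiting 0 and restartCount stays at its initial 0 for the whole four-minute observation window. A hedged sketch of such an exec liveness probe (image and timings are illustrative; recent client-go embeds ProbeHandler, which the v1.15 API used in this run still called Handler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // give the container time to create the file
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Printf("liveness probe: %+v\n", pod.Spec.Containers[0].LivenessProbe)
}
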
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:44:03.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-61184ef4-a107-40b2-8eee-a7ceeafafd45
STEP: Creating a pod to test consume secrets
Feb 13 14:44:03.372: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b" in namespace "projected-1965" to be "success or failure"
Feb 13 14:44:03.379: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.602796ms
Feb 13 14:44:05.387: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015482825s
Feb 13 14:44:07.401: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029174704s
Feb 13 14:44:09.410: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037832851s
Feb 13 14:44:11.416: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044457252s
STEP: Saw pod success
Feb 13 14:44:11.416: INFO: Pod "pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b" satisfied condition "success or failure"
Feb 13 14:44:11.420: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 14:44:11.498: INFO: Waiting for pod pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b to disappear
Feb 13 14:44:11.504: INFO: Pod pod-projected-secrets-de511ae3-a48c-4bfa-a3f6-399143bc4c3b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:44:11.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1965" for this suite.
Feb 13 14:44:17.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:44:17.677: INFO: namespace projected-1965 deletion completed in 6.164273221s

• [SLOW TEST:14.388 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
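
"Mappings and Item Mode" means individual secret keys are projected to chosen file paths with an explicit per-file mode, rather than dumping every key at its own name with the volume default. The volume source at the core of the test looks roughly like this (secret, key and path names are hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // the per-item file mode ("Item Mode")
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"}, // hypothetical
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret (hypothetical)
							Path: "new-path-data-1", // file name inside the mount (the "mapping")
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
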
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:44:17.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0213 14:44:33.114644       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 14:44:33.114: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:44:33.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3873" for this suite.
Feb 13 14:44:45.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:44:46.192: INFO: namespace gc-3873 deletion completed in 12.988363249s

• [SLOW TEST:28.514 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
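
The setup above hinges on ownerReferences: half of the pods created by simpletest-rc-to-be-deleted get a second owner, simpletest-rc-to-stay, and the first RC is then deleted with foreground propagation, so only pods whose sole owner was the deleted RC go away. A sketch of both steps (names are from the log; UIDs and clientset wiring are placeholders):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addSecondOwner gives an existing pod an additional owner, so foreground
// deletion of the first owner must leave the pod alive.
func addSecondOwner(ctx context.Context, cs kubernetes.Interface, ns, podName string, rc2UID types.UID) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest-rc-to-stay",
		UID:        rc2UID, // must match the live object's UID
	})
	_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}

// deleteWithForeground deletes the first RC and waits for its dependents,
// matching "owner that's waiting for dependents to be deleted".
func deleteWithForeground(ctx context.Context, cs kubernetes.Interface, ns string) error {
	fg := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc-to-be-deleted",
		metav1.DeleteOptions{PropagationPolicy: &fg})
}

func main() {} // clientset construction omitted for brevity
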
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:44:46.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 13 14:44:47.358: INFO: Waiting up to 5m0s for pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007" in namespace "emptydir-3832" to be "success or failure"
Feb 13 14:44:47.834: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 474.738448ms
Feb 13 14:44:50.296: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936928245s
Feb 13 14:44:52.308: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.949183571s
Feb 13 14:44:54.317: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958165429s
Feb 13 14:44:56.327: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.967889003s
Feb 13 14:44:58.336: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.977528435s
Feb 13 14:45:00.344: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.985456111s
STEP: Saw pod success
Feb 13 14:45:00.344: INFO: Pod "pod-a0c73f63-1d61-4745-bf61-bb413b517007" satisfied condition "success or failure"
Feb 13 14:45:00.348: INFO: Trying to get logs from node iruya-node pod pod-a0c73f63-1d61-4745-bf61-bb413b517007 container test-container: 
STEP: delete the pod
Feb 13 14:45:00.410: INFO: Waiting for pod pod-a0c73f63-1d61-4745-bf61-bb413b517007 to disappear
Feb 13 14:45:00.426: INFO: Pod pod-a0c73f63-1d61-4745-bf61-bb413b517007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:45:00.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3832" for this suite.
Feb 13 14:45:06.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:45:06.739: INFO: namespace emptydir-3832 deletion completed in 6.288605619s

• [SLOW TEST:20.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
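
All the (root,NNNN,default) emptyDir variants share one shape: an emptyDir volume on the default medium, a file created with the requested mode, and the output verified after the container exits, which is why the framework waits for "success or failure" rather than readiness. A hedged equivalent pod, with busybox standing in for the e2e mounttest image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, then report Succeeded/Failed
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
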
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:45:06.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 13 14:45:06.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4131'
Feb 13 14:45:07.407: INFO: stderr: ""
Feb 13 14:45:07.407: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 14:45:07.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:45:07.689: INFO: stderr: ""
Feb 13 14:45:07.689: INFO: stdout: "update-demo-nautilus-np4q5 update-demo-nautilus-qx7lg "
Feb 13 14:45:07.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:07.802: INFO: stderr: ""
Feb 13 14:45:07.802: INFO: stdout: ""
Feb 13 14:45:07.802: INFO: update-demo-nautilus-np4q5 is created but not running
Feb 13 14:45:12.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:45:15.389: INFO: stderr: ""
Feb 13 14:45:15.390: INFO: stdout: "update-demo-nautilus-np4q5 update-demo-nautilus-qx7lg "
Feb 13 14:45:15.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:15.667: INFO: stderr: ""
Feb 13 14:45:15.668: INFO: stdout: ""
Feb 13 14:45:15.668: INFO: update-demo-nautilus-np4q5 is created but not running
Feb 13 14:45:20.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:45:20.853: INFO: stderr: ""
Feb 13 14:45:20.853: INFO: stdout: "update-demo-nautilus-np4q5 update-demo-nautilus-qx7lg "
Feb 13 14:45:20.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:21.018: INFO: stderr: ""
Feb 13 14:45:21.018: INFO: stdout: "true"
Feb 13 14:45:21.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:21.142: INFO: stderr: ""
Feb 13 14:45:21.142: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 14:45:21.142: INFO: validating pod update-demo-nautilus-np4q5
Feb 13 14:45:21.167: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 14:45:21.167: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 13 14:45:21.167: INFO: update-demo-nautilus-np4q5 is verified up and running
Feb 13 14:45:21.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx7lg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:21.249: INFO: stderr: ""
Feb 13 14:45:21.249: INFO: stdout: ""
Feb 13 14:45:21.249: INFO: update-demo-nautilus-qx7lg is created but not running
Feb 13 14:45:26.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:45:26.436: INFO: stderr: ""
Feb 13 14:45:26.437: INFO: stdout: "update-demo-nautilus-np4q5 update-demo-nautilus-qx7lg "
Feb 13 14:45:26.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:26.567: INFO: stderr: ""
Feb 13 14:45:26.567: INFO: stdout: "true"
Feb 13 14:45:26.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np4q5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:26.677: INFO: stderr: ""
Feb 13 14:45:26.677: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 14:45:26.677: INFO: validating pod update-demo-nautilus-np4q5
Feb 13 14:45:26.687: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 14:45:26.687: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 13 14:45:26.687: INFO: update-demo-nautilus-np4q5 is verified up and running
Feb 13 14:45:26.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx7lg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:26.788: INFO: stderr: ""
Feb 13 14:45:26.788: INFO: stdout: "true"
Feb 13 14:45:26.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx7lg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:45:26.919: INFO: stderr: ""
Feb 13 14:45:26.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 14:45:26.919: INFO: validating pod update-demo-nautilus-qx7lg
Feb 13 14:45:26.925: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 14:45:26.925: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 13 14:45:26.925: INFO: update-demo-nautilus-qx7lg is verified up and running
STEP: rolling-update to new replication controller
Feb 13 14:45:26.928: INFO: scanned /root for discovery docs: 
Feb 13 14:45:26.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4131'
Feb 13 14:46:02.079: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 13 14:46:02.079: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 14:46:02.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:46:02.275: INFO: stderr: ""
Feb 13 14:46:02.275: INFO: stdout: "update-demo-kitten-8jt5k update-demo-kitten-gcs8p update-demo-nautilus-np4q5 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 13 14:46:07.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4131'
Feb 13 14:46:07.391: INFO: stderr: ""
Feb 13 14:46:07.391: INFO: stdout: "update-demo-kitten-8jt5k update-demo-kitten-gcs8p "
Feb 13 14:46:07.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8jt5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:46:07.504: INFO: stderr: ""
Feb 13 14:46:07.504: INFO: stdout: "true"
Feb 13 14:46:07.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8jt5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:46:07.668: INFO: stderr: ""
Feb 13 14:46:07.668: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 13 14:46:07.668: INFO: validating pod update-demo-kitten-8jt5k
Feb 13 14:46:07.687: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 13 14:46:07.687: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 13 14:46:07.687: INFO: update-demo-kitten-8jt5k is verified up and running
Feb 13 14:46:07.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gcs8p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:46:07.872: INFO: stderr: ""
Feb 13 14:46:07.872: INFO: stdout: "true"
Feb 13 14:46:07.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gcs8p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4131'
Feb 13 14:46:08.019: INFO: stderr: ""
Feb 13 14:46:08.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 13 14:46:08.019: INFO: validating pod update-demo-kitten-gcs8p
Feb 13 14:46:08.037: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 13 14:46:08.037: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 13 14:46:08.037: INFO: update-demo-kitten-gcs8p is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:46:08.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4131" for this suite.
Feb 13 14:46:30.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:46:30.162: INFO: namespace kubectl-4131 deletion completed in 22.11740904s

• [SLOW TEST:83.422 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
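
The (exists . "status" "containerStatuses") construct above relies on kubectl's template extensions, not plain Go templates; the same "is the update-demo container running yet?" check is simpler in client-go (clientset wiring omitted):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// containerRunning mirrors the template check: true only when the named
// container reports a running state in .status.containerStatuses.
func containerRunning(ctx context.Context, cs kubernetes.Interface, ns, podName, container string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, st := range pod.Status.ContainerStatuses {
		if st.Name == container && st.State.Running != nil {
			return true, nil
		}
	}
	return false, nil // "created but not running", as the log puts it
}

func main() {} // kubeconfig/clientset construction omitted for brevity
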
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:46:30.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d240c653-fa0b-4a2a-ba94-51aec4c53dd8
STEP: Creating a pod to test consume secrets
Feb 13 14:46:30.350: INFO: Waiting up to 5m0s for pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0" in namespace "secrets-2782" to be "success or failure"
Feb 13 14:46:30.360: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.366113ms
Feb 13 14:46:32.367: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017326023s
Feb 13 14:46:34.375: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024560638s
Feb 13 14:46:36.383: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033404936s
Feb 13 14:46:38.395: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045277714s
Feb 13 14:46:40.404: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054475527s
STEP: Saw pod success
Feb 13 14:46:40.405: INFO: Pod "pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0" satisfied condition "success or failure"
Feb 13 14:46:40.409: INFO: Trying to get logs from node iruya-node pod pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0 container secret-volume-test: 
STEP: delete the pod
Feb 13 14:46:40.470: INFO: Waiting for pod pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0 to disappear
Feb 13 14:46:40.480: INFO: Pod pod-secrets-4aece094-4822-4f87-a676-9061e81e94b0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:46:40.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2782" for this suite.
Feb 13 14:46:46.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:46:46.603: INFO: namespace secrets-2782 deletion completed in 6.116827823s

• [SLOW TEST:16.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
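
Each "Waiting up to 5m0s for pod ... to be success or failure" run above is a plain phase poll against the pod's status. A hedged reimplementation using wait.PollImmediate (long available in k8s.io/apimachinery, though newer releases prefer the context-based variants):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSuccessOrFailure polls the pod phase, printing a line per attempt
// much like the "Phase=\"Pending\" ... Elapsed:" entries above.
func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

func main() {} // clientset construction omitted
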
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:46:46.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 13 14:47:06.763: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:06.789: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 14:47:08.789: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:08.797: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 14:47:10.789: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:10.797: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 14:47:12.790: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:12.800: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 14:47:14.790: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:14.801: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 14:47:16.789: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 14:47:16.796: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:47:16.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9885" for this suite.
Feb 13 14:47:38.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:47:38.993: INFO: namespace container-lifecycle-hook-9885 deletion completed in 22.156354608s

• [SLOW TEST:52.390 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
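
The preStop hook here is an HTTPGet fired at the handler pod created in BeforeEach while pod-with-prestop-http-hook terminates; the final "check prestop hook" step then asks that handler whether the request arrived. The lifecycle stanza looks roughly like this (host, port and path are illustrative; recent client-go calls the handler type LifecycleHandler, formerly Handler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop",  // hypothetical endpoint on the handler pod
					Host: "10.32.0.4",          // handler pod IP (illustrative)
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Printf("%+v\n", c.Lifecycle.PreStop.HTTPGet)
}
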
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:47:38.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:47:39.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 13 14:47:39.251: INFO: stderr: ""
Feb 13 14:47:39.251: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:47:39.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7830" for this suite.
Feb 13 14:47:45.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:47:45.410: INFO: namespace kubectl-7830 deletion completed in 6.15112659s

• [SLOW TEST:6.417 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
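
The server half of that version string is also available programmatically through the discovery client, the same endpoint kubectl version queries:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sv, err := cs.Discovery().ServerVersion() // same data as the "Server Version:" line above
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (git %s, built %s)\n", sv.GitVersion, sv.GitCommit, sv.BuildDate)
}
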
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:47:45.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-j2cq
STEP: Creating a pod to test atomic-volume-subpath
Feb 13 14:47:45.641: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-j2cq" in namespace "subpath-9183" to be "success or failure"
Feb 13 14:47:45.647: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.369697ms
Feb 13 14:47:47.655: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013344216s
Feb 13 14:47:49.664: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022732064s
Feb 13 14:47:51.670: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029189141s
Feb 13 14:47:53.679: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037836343s
Feb 13 14:47:55.690: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 10.048592899s
Feb 13 14:47:57.699: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 12.057658393s
Feb 13 14:47:59.707: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 14.066302903s
Feb 13 14:48:01.717: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 16.076044873s
Feb 13 14:48:03.728: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 18.086435804s
Feb 13 14:48:05.735: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 20.094098805s
Feb 13 14:48:07.745: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 22.10420977s
Feb 13 14:48:09.752: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 24.111193998s
Feb 13 14:48:11.761: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 26.12017461s
Feb 13 14:48:13.776: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Running", Reason="", readiness=true. Elapsed: 28.135013155s
Feb 13 14:48:15.791: INFO: Pod "pod-subpath-test-projected-j2cq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.149924157s
STEP: Saw pod success
Feb 13 14:48:15.791: INFO: Pod "pod-subpath-test-projected-j2cq" satisfied condition "success or failure"
Feb 13 14:48:15.804: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-j2cq container test-container-subpath-projected-j2cq: 
STEP: delete the pod
Feb 13 14:48:15.941: INFO: Waiting for pod pod-subpath-test-projected-j2cq to disappear
Feb 13 14:48:15.958: INFO: Pod pod-subpath-test-projected-j2cq no longer exists
STEP: Deleting pod pod-subpath-test-projected-j2cq
Feb 13 14:48:15.958: INFO: Deleting pod "pod-subpath-test-projected-j2cq" in namespace "subpath-9183"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:48:15.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9183" for this suite.
Feb 13 14:48:22.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:48:22.123: INFO: namespace subpath-9183 deletion completed in 6.154996584s

• [SLOW TEST:36.712 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
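
A subpath mount exposes a single entry from inside a volume instead of the volume root; the "Atomic writer volumes" part of the test then verifies the mount stays correct while the projected content is swapped atomically underneath it. The only spec-level difference from an ordinary mount is the SubPath field (volume and entry names hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mounts := []corev1.VolumeMount{
		{Name: "projected-volume", MountPath: "/whole-volume"},                                 // entire volume
		{Name: "projected-volume", MountPath: "/sub-file", SubPath: "projected-configmap-key"}, // one entry only
	}
	fmt.Printf("%+v\n", mounts)
}
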
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:48:22.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-663
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 14:48:22.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 14:48:54.522: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-663 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:48:54.522: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:48:54.613784       8 log.go:172] (0xc0001cb600) (0xc003228640) Create stream
I0213 14:48:54.613870       8 log.go:172] (0xc0001cb600) (0xc003228640) Stream added, broadcasting: 1
I0213 14:48:54.651547       8 log.go:172] (0xc0001cb600) Reply frame received for 1
I0213 14:48:54.651805       8 log.go:172] (0xc0001cb600) (0xc000ab2640) Create stream
I0213 14:48:54.651853       8 log.go:172] (0xc0001cb600) (0xc000ab2640) Stream added, broadcasting: 3
I0213 14:48:54.656167       8 log.go:172] (0xc0001cb600) Reply frame received for 3
I0213 14:48:54.656195       8 log.go:172] (0xc0001cb600) (0xc0013aab40) Create stream
I0213 14:48:54.656202       8 log.go:172] (0xc0001cb600) (0xc0013aab40) Stream added, broadcasting: 5
I0213 14:48:54.661421       8 log.go:172] (0xc0001cb600) Reply frame received for 5
I0213 14:48:54.868143       8 log.go:172] (0xc0001cb600) Data frame received for 3
I0213 14:48:54.868215       8 log.go:172] (0xc000ab2640) (3) Data frame handling
I0213 14:48:54.868246       8 log.go:172] (0xc000ab2640) (3) Data frame sent
I0213 14:48:55.019221       8 log.go:172] (0xc0001cb600) Data frame received for 1
I0213 14:48:55.019287       8 log.go:172] (0xc0001cb600) (0xc0013aab40) Stream removed, broadcasting: 5
I0213 14:48:55.019350       8 log.go:172] (0xc003228640) (1) Data frame handling
I0213 14:48:55.019375       8 log.go:172] (0xc003228640) (1) Data frame sent
I0213 14:48:55.019406       8 log.go:172] (0xc0001cb600) (0xc000ab2640) Stream removed, broadcasting: 3
I0213 14:48:55.019482       8 log.go:172] (0xc0001cb600) (0xc003228640) Stream removed, broadcasting: 1
I0213 14:48:55.019587       8 log.go:172] (0xc0001cb600) (0xc003228640) Stream removed, broadcasting: 1
I0213 14:48:55.019602       8 log.go:172] (0xc0001cb600) (0xc000ab2640) Stream removed, broadcasting: 3
I0213 14:48:55.019608       8 log.go:172] (0xc0001cb600) (0xc0013aab40) Stream removed, broadcasting: 5
I0213 14:48:55.019920       8 log.go:172] (0xc0001cb600) Go away received
Feb 13 14:48:55.020: INFO: Waiting for endpoints: map[]
Feb 13 14:48:55.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-663 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:48:55.030: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:48:55.096499       8 log.go:172] (0xc0009f80b0) (0xc000ab30e0) Create stream
I0213 14:48:55.096691       8 log.go:172] (0xc0009f80b0) (0xc000ab30e0) Stream added, broadcasting: 1
I0213 14:48:55.103313       8 log.go:172] (0xc0009f80b0) Reply frame received for 1
I0213 14:48:55.103372       8 log.go:172] (0xc0009f80b0) (0xc0018a4000) Create stream
I0213 14:48:55.103393       8 log.go:172] (0xc0009f80b0) (0xc0018a4000) Stream added, broadcasting: 3
I0213 14:48:55.108613       8 log.go:172] (0xc0009f80b0) Reply frame received for 3
I0213 14:48:55.108649       8 log.go:172] (0xc0009f80b0) (0xc000ab3180) Create stream
I0213 14:48:55.108660       8 log.go:172] (0xc0009f80b0) (0xc000ab3180) Stream added, broadcasting: 5
I0213 14:48:55.111378       8 log.go:172] (0xc0009f80b0) Reply frame received for 5
I0213 14:48:55.225873       8 log.go:172] (0xc0009f80b0) Data frame received for 3
I0213 14:48:55.225930       8 log.go:172] (0xc0018a4000) (3) Data frame handling
I0213 14:48:55.225972       8 log.go:172] (0xc0018a4000) (3) Data frame sent
I0213 14:48:55.339963       8 log.go:172] (0xc0009f80b0) Data frame received for 1
I0213 14:48:55.340134       8 log.go:172] (0xc000ab30e0) (1) Data frame handling
I0213 14:48:55.340159       8 log.go:172] (0xc000ab30e0) (1) Data frame sent
I0213 14:48:55.340193       8 log.go:172] (0xc0009f80b0) (0xc000ab30e0) Stream removed, broadcasting: 1
I0213 14:48:55.340644       8 log.go:172] (0xc0009f80b0) (0xc0018a4000) Stream removed, broadcasting: 3
I0213 14:48:55.340692       8 log.go:172] (0xc0009f80b0) (0xc000ab3180) Stream removed, broadcasting: 5
I0213 14:48:55.340718       8 log.go:172] (0xc0009f80b0) Go away received
I0213 14:48:55.340875       8 log.go:172] (0xc0009f80b0) (0xc000ab30e0) Stream removed, broadcasting: 1
I0213 14:48:55.340905       8 log.go:172] (0xc0009f80b0) (0xc0018a4000) Stream removed, broadcasting: 3
I0213 14:48:55.340919       8 log.go:172] (0xc0009f80b0) (0xc000ab3180) Stream removed, broadcasting: 5
Feb 13 14:48:55.341: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:48:55.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-663" for this suite.
Feb 13 14:49:19.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:49:19.578: INFO: namespace pod-network-test-663 deletion completed in 24.22468647s

• [SLOW TEST:57.455 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
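
Each ExecWithOptions line above is a call to the pods/exec subresource; the surrounding log.go stream chatter is the SPDY connection that carries it. A hedged client-go equivalent of the first curl (StreamWithContext is the current method name; older releases expose Stream):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same shape as the logged ExecWithOptions: run curl inside the host test container.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pod-network-test-663").Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "hostexec",
			Command:   []string{"/bin/sh", "-c", "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.StreamWithContext(context.TODO(),
		remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println(stdout.String()) // the /dial response names the pods it reached
}
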
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:49:19.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 13 14:49:19.720: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6149,SelfLink:/api/v1/namespaces/watch-6149/configmaps/e2e-watch-test-resource-version,UID:21e4db0b-ac0f-4a76-83ad-befb97f9c6f4,ResourceVersion:24210992,Generation:0,CreationTimestamp:2020-02-13 14:49:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 14:49:19.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6149,SelfLink:/api/v1/namespaces/watch-6149/configmaps/e2e-watch-test-resource-version,UID:21e4db0b-ac0f-4a76-83ad-befb97f9c6f4,ResourceVersion:24210993,Generation:0,CreationTimestamp:2020-02-13 14:49:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:49:19.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6149" for this suite.
Feb 13 14:49:25.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:49:25.921: INFO: namespace watch-6149 deletion completed in 6.189852992s

• [SLOW TEST:6.343 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
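
Starting the watch at the resourceVersion returned by the first update is what makes the MODIFIED and DELETED events above arrive even though the configmap was already deleted: the API server replays history from that point. A sketch (the resourceVersion literal is a placeholder for the value captured from the first update):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	w, err := cs.CoreV1().ConfigMaps("watch-6149").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "24210991", // placeholder: RV returned by the first update
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object) // expect MODIFIED, then DELETED
	}
}
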
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:49:25.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4419/configmap-test-61bcddc9-010c-4d5b-9ac5-879db0f9ab33
STEP: Creating a pod to test consume configMaps
Feb 13 14:49:26.122: INFO: Waiting up to 5m0s for pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015" in namespace "configmap-4419" to be "success or failure"
Feb 13 14:49:26.137: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015": Phase="Pending", Reason="", readiness=false. Elapsed: 15.497724ms
Feb 13 14:49:28.181: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05883839s
Feb 13 14:49:30.232: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11005336s
Feb 13 14:49:32.240: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117997935s
Feb 13 14:49:34.254: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132219189s
STEP: Saw pod success
Feb 13 14:49:34.254: INFO: Pod "pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015" satisfied condition "success or failure"
Feb 13 14:49:34.258: INFO: Trying to get logs from node iruya-node pod pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015 container env-test: 
STEP: delete the pod
Feb 13 14:49:34.375: INFO: Waiting for pod pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015 to disappear
Feb 13 14:49:34.379: INFO: Pod pod-configmaps-857e0b1d-e523-4a27-b4a3-46ff85f3f015 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:49:34.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4419" for this suite.
Feb 13 14:49:40.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:49:40.553: INFO: namespace configmap-4419 deletion completed in 6.170575826s

• [SLOW TEST:14.631 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
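
The pod created here consumes a ConfigMap key as an environment variable. A minimal sketch of such a pod object, assuming corev1 is "k8s.io/api/core/v1" and metav1 is "k8s.io/apimachinery/pkg/apis/meta/v1" as in the watch sketch above; every name and key below is illustrative rather than the generated ones in the log:

    var envTestPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        // Pull the value of key "data-1" from the ConfigMap
                        // into the container's environment.
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }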
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:49:40.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 14:49:40.654: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.454676ms)
Feb 13 14:49:40.677: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.500677ms)
Feb 13 14:49:40.684: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.434769ms)
Feb 13 14:49:40.694: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.506725ms)
Feb 13 14:49:40.699: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.114436ms)
Feb 13 14:49:40.704: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.212658ms)
Feb 13 14:49:40.708: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.991258ms)
Feb 13 14:49:40.713: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.780159ms)
Feb 13 14:49:40.718: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.11016ms)
Feb 13 14:49:40.722: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.998804ms)
Feb 13 14:49:40.727: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.423502ms)
Feb 13 14:49:40.730: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.672473ms)
Feb 13 14:49:40.735: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.463186ms)
Feb 13 14:49:40.741: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.935256ms)
Feb 13 14:49:40.747: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.434712ms)
Feb 13 14:49:40.752: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.075216ms)
Feb 13 14:49:40.757: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.506265ms)
Feb 13 14:49:40.762: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.98043ms)
Feb 13 14:49:40.766: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.767561ms)
Feb 13 14:49:40.770: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.85359ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:49:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5277" for this suite.
Feb 13 14:49:46.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:49:46.942: INFO: namespace proxy-5277 deletion completed in 6.167313953s

• [SLOW TEST:6.389 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
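
Each of the twenty numbered requests above is a GET against the node proxy subresource, with the kubelet port (10250) spelled out in the node name. Roughly, in client-go (v1.15-era DoRaw without a context; client is a *kubernetes.Clientset as in the watch sketch further up):

    data, err := client.CoreV1().RESTClient().Get().
        Resource("nodes").
        Name("iruya-node:10250"). // node name with the kubelet port made explicit
        SubResource("proxy").
        Suffix("logs/").
        DoRaw()
    if err == nil {
        fmt.Printf("%.80s\n", string(data)) // the "alternatives.log ..." listing above
    }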
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:49:46.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 13 14:49:55.693: INFO: Successfully updated pod "labelsupdateba7dde11-16b7-466b-813a-8185395d8dc5"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:49:59.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3093" for this suite.
Feb 13 14:50:21.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:50:21.969: INFO: namespace downward-api-3093 deletion completed in 22.177652301s

• [SLOW TEST:35.026 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
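
The test mounts the pod's own labels through a downwardAPI volume, patches the labels, and waits for the kubelet to rewrite the projected file ("Successfully updated pod" above). A sketch of the volume wiring, with illustrative names and the same imports as the earlier sketches:

    var labelsPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate-demo",
            Labels: map[string]string{"key": "value1"}, // later patched to trigger the rewrite
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }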
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:50:21.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 13 14:50:22.143: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 13 14:50:22.184: INFO: Waiting for terminating namespaces to be deleted...
Feb 13 14:50:22.239: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 13 14:50:22.259: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.259: INFO: 	Container kube-bench ready: false, restart count 0
Feb 13 14:50:22.259: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.259: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 13 14:50:22.259: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 13 14:50:22.259: INFO: 	Container weave ready: true, restart count 0
Feb 13 14:50:22.259: INFO: 	Container weave-npc ready: true, restart count 0
Feb 13 14:50:22.259: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 13 14:50:22.274: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 13 14:50:22.274: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 13 14:50:22.274: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 13 14:50:22.274: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 13 14:50:22.274: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container coredns ready: true, restart count 0
Feb 13 14:50:22.274: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container etcd ready: true, restart count 0
Feb 13 14:50:22.274: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 13 14:50:22.274: INFO: 	Container weave ready: true, restart count 0
Feb 13 14:50:22.274: INFO: 	Container weave-npc ready: true, restart count 0
Feb 13 14:50:22.274: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 13 14:50:22.274: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f2fdc07ae7ee73], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:50:23.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4158" for this suite.
Feb 13 14:50:29.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:50:29.467: INFO: namespace sched-pred-4158 deletion completed in 6.137383828s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.498 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
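
The FailedScheduling event above comes from a pod whose nodeSelector matches no label on either node. Sketch (illustrative names; same imports as before):

    var restrictedPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            // No node carries this label, so the scheduler emits the
            // FailedScheduling event instead of binding the pod.
            NodeSelector: map[string]string{"label": "nonempty"},
            Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
        },
    }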
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:50:29.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-00fac92f-54d0-46f3-b546-bd3a53b7cf4c
STEP: Creating a pod to test consume secrets
Feb 13 14:50:29.587: INFO: Waiting up to 5m0s for pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998" in namespace "secrets-4362" to be "success or failure"
Feb 13 14:50:29.614: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Pending", Reason="", readiness=false. Elapsed: 26.441387ms
Feb 13 14:50:31.622: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03461423s
Feb 13 14:50:33.638: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051276855s
Feb 13 14:50:35.649: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061686086s
Feb 13 14:50:37.657: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069779711s
Feb 13 14:50:39.666: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07840058s
STEP: Saw pod success
Feb 13 14:50:39.666: INFO: Pod "pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998" satisfied condition "success or failure"
Feb 13 14:50:39.670: INFO: Trying to get logs from node iruya-node pod pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998 container secret-volume-test: 
STEP: delete the pod
Feb 13 14:50:39.794: INFO: Waiting for pod pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998 to disappear
Feb 13 14:50:39.802: INFO: Pod pod-secrets-8b524f8d-1517-480e-8277-30fa30e47998 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:50:39.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4362" for this suite.
Feb 13 14:50:45.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:50:46.017: INFO: namespace secrets-4362 deletion completed in 6.208879749s

• [SLOW TEST:16.550 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
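
This variant projects a Secret key under a remapped file name with an explicit per-item mode. A sketch of such a pod, with illustrative names and an assumed 0400 mode:

    var itemMode = int32(0400)
    var secretPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/etc/secret-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "secret-test-map",
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",
                            Path: "new-path-data-1", // the "mappings" part: key renamed on disk
                            Mode: &itemMode,         // the "Item Mode set" part
                        }},
                    },
                },
            }},
        },
    }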
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:50:46.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3914.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3914.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 13 14:50:58.212: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.223: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.231: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.243: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.249: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.255: INFO: Unable to read jessie_udp@PodARecord from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.260: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b: the server could not find the requested resource (get pods dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b)
Feb 13 14:50:58.260: INFO: Lookups using dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 13 14:51:03.328: INFO: DNS probes using dns-3914/dns-test-44e4cfcd-2905-4f6f-bea8-aea54aceb66b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:51:03.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3914" for this suite.
Feb 13 14:51:09.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:51:09.758: INFO: namespace dns-3914 deletion completed in 6.337002008s

• [SLOW TEST:23.741 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
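
The dig loops above confirm that the API server's service name resolves over both UDP and TCP inside the two test images. The same basic check from Go, runnable inside any pod on the cluster (assumes the default cluster.local domain):

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            log.Fatalf("cluster DNS lookup failed: %v", err)
        }
        fmt.Println("resolved to", addrs) // typically the kubernetes service ClusterIP
    }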
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:51:09.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:51:09.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f" in namespace "downward-api-9047" to be "success or failure"
Feb 13 14:51:09.890: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.9529ms
Feb 13 14:51:11.914: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046243687s
Feb 13 14:51:13.924: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056750406s
Feb 13 14:51:15.932: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064410175s
Feb 13 14:51:17.966: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098646631s
Feb 13 14:51:19.977: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109239117s
STEP: Saw pod success
Feb 13 14:51:19.977: INFO: Pod "downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f" satisfied condition "success or failure"
Feb 13 14:51:19.983: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f container client-container: 
STEP: delete the pod
Feb 13 14:51:20.051: INFO: Waiting for pod downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f to disappear
Feb 13 14:51:20.054: INFO: Pod downwardapi-volume-e5c738aa-f684-472f-9d2b-e9ca89f5646f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:51:20.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9047" for this suite.
Feb 13 14:51:26.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:51:26.863: INFO: namespace downward-api-9047 deletion completed in 6.804577615s

• [SLOW TEST:17.103 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
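
Here the downwardAPI volume projects limits.memory for a container that sets no memory limit, so the kubelet substitutes the node's allocatable memory. A sketch of the wiring, additionally assuming resource is "k8s.io/apimachinery/pkg/api/resource"; the 1Mi divisor is an assumption:

    var memLimitPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                // No resources.limits.memory here, so the projected value
                // falls back to the node's allocatable memory.
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                                Divisor:       resource.MustParse("1Mi"), // assumed divisor
                            },
                        }},
                    },
                },
            }},
        },
    }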
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:51:26.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 13 14:51:34.711: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:51:34.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2454" for this suite.
Feb 13 14:51:40.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:51:40.977: INFO: namespace container-runtime-2454 deletion completed in 6.182597092s

• [SLOW TEST:14.114 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
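
The container here exits non-zero without writing to its terminationMessagePath, so with FallbackToLogsOnError the kubelet lifts the message from the log tail ("DONE" above). Sketch with illustrative names:

    var termPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "term",
                Image: "busybox",
                // Writes nothing to terminationMessagePath and exits non-zero,
                // so the kubelet takes the message from the log tail instead.
                Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }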
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:51:40.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:51:41.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2" in namespace "downward-api-7044" to be "success or failure"
Feb 13 14:51:41.146: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097325ms
Feb 13 14:51:43.155: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018724463s
Feb 13 14:51:45.165: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028601693s
Feb 13 14:51:47.203: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067161449s
Feb 13 14:51:49.210: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074235984s
STEP: Saw pod success
Feb 13 14:51:49.211: INFO: Pod "downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2" satisfied condition "success or failure"
Feb 13 14:51:49.214: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2 container client-container: 
STEP: delete the pod
Feb 13 14:51:49.288: INFO: Waiting for pod downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2 to disappear
Feb 13 14:51:49.373: INFO: Pod downwardapi-volume-4eee880a-911e-4512-a7d8-af45775eace2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:51:49.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7044" for this suite.
Feb 13 14:51:55.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:51:55.635: INFO: namespace downward-api-7044 deletion completed in 6.254621402s

• [SLOW TEST:14.658 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
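
The memory-request variant differs from the memory-limit test above only in the projected field: with resources.requests.memory set on the container, the downwardAPI item would read (fragment, same imports and assumptions):

    ResourceFieldRef: &corev1.ResourceFieldSelector{
        ContainerName: "client-container",
        Resource:      "requests.memory", // instead of "limits.memory"
        Divisor:       resource.MustParse("1Mi"),
    },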
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:51:55.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 13 14:52:07.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6a60a038-dc69-495c-a669-dce6010b1fb9 -c busybox-main-container --namespace=emptydir-7792 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 13 14:52:10.963: INFO: stderr: "I0213 14:52:10.455351    3844 log.go:172] (0xc00012ae70) (0xc00064abe0) Create stream\nI0213 14:52:10.455564    3844 log.go:172] (0xc00012ae70) (0xc00064abe0) Stream added, broadcasting: 1\nI0213 14:52:10.470289    3844 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0213 14:52:10.470356    3844 log.go:172] (0xc00012ae70) (0xc00064ac80) Create stream\nI0213 14:52:10.470378    3844 log.go:172] (0xc00012ae70) (0xc00064ac80) Stream added, broadcasting: 3\nI0213 14:52:10.472359    3844 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0213 14:52:10.472429    3844 log.go:172] (0xc00012ae70) (0xc000890000) Create stream\nI0213 14:52:10.472469    3844 log.go:172] (0xc00012ae70) (0xc000890000) Stream added, broadcasting: 5\nI0213 14:52:10.481832    3844 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0213 14:52:10.818078    3844 log.go:172] (0xc00012ae70) Data frame received for 3\nI0213 14:52:10.818277    3844 log.go:172] (0xc00064ac80) (3) Data frame handling\nI0213 14:52:10.818310    3844 log.go:172] (0xc00064ac80) (3) Data frame sent\nI0213 14:52:10.952004    3844 log.go:172] (0xc00012ae70) (0xc00064ac80) Stream removed, broadcasting: 3\nI0213 14:52:10.952200    3844 log.go:172] (0xc00012ae70) Data frame received for 1\nI0213 14:52:10.952231    3844 log.go:172] (0xc00064abe0) (1) Data frame handling\nI0213 14:52:10.952264    3844 log.go:172] (0xc00064abe0) (1) Data frame sent\nI0213 14:52:10.952280    3844 log.go:172] (0xc00012ae70) (0xc000890000) Stream removed, broadcasting: 5\nI0213 14:52:10.952347    3844 log.go:172] (0xc00012ae70) (0xc00064abe0) Stream removed, broadcasting: 1\nI0213 14:52:10.952366    3844 log.go:172] (0xc00012ae70) Go away received\nI0213 14:52:10.953236    3844 log.go:172] (0xc00012ae70) (0xc00064abe0) Stream removed, broadcasting: 1\nI0213 14:52:10.953260    3844 log.go:172] (0xc00012ae70) (0xc00064ac80) Stream removed, broadcasting: 3\nI0213 14:52:10.953269    3844 log.go:172] (0xc00012ae70) (0xc000890000) Stream removed, broadcasting: 5\n"
Feb 13 14:52:10.963: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:52:10.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7792" for this suite.
Feb 13 14:52:17.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:52:17.103: INFO: namespace emptydir-7792 deletion completed in 6.132568418s

• [SLOW TEST:21.468 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
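
The exec output above ("Hello from the busy-box sub-container") is read through an emptyDir shared by two containers of one pod. A sketch of such a pod, with the paths taken from the log and everything else illustrative:

    var sharedVolumePod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "shared-data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{
                {
                    // The container later targeted by kubectl exec.
                    Name:         "busybox-main-container",
                    Image:        "busybox",
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
                },
                {
                    // Writes the file that the main container reads back.
                    Name:  "busybox-sub-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
                },
            },
        },
    }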
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:52:17.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0213 14:52:20.965943       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 14:52:20.966: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:52:20.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1325" for this suite.
Feb 13 14:52:27.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:52:27.455: INFO: namespace gc-1325 deletion completed in 6.48648406s

• [SLOW TEST:10.352 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
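
"Not orphaning" means the deployment is deleted with a cascading propagation policy, after which the garbage collector removes the owned ReplicaSet and pods; the "expected 0 rs, got 1 rs" lines are the test polling until that finishes. Sketch, v1.15-era Delete signature:

    policy := metav1.DeletePropagationBackground // Foreground also cascades
    err := client.AppsV1().Deployments("gc-1325").Delete(
        "simpletest-deployment", // illustrative; the test generates its own name
        &metav1.DeleteOptions{PropagationPolicy: &policy},
    )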
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:52:27.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2bcf3a23-fb88-473b-a702-6c45a490cdfb
STEP: Creating a pod to test consume configMaps
Feb 13 14:52:27.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c" in namespace "projected-2533" to be "success or failure"
Feb 13 14:52:27.608: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.912697ms
Feb 13 14:52:29.618: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042063828s
Feb 13 14:52:31.634: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057926895s
Feb 13 14:52:33.642: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066043747s
Feb 13 14:52:35.653: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Running", Reason="", readiness=true. Elapsed: 8.076270347s
Feb 13 14:52:37.664: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087602617s
STEP: Saw pod success
Feb 13 14:52:37.664: INFO: Pod "pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c" satisfied condition "success or failure"
Feb 13 14:52:37.669: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 14:52:37.833: INFO: Waiting for pod pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c to disappear
Feb 13 14:52:37.890: INFO: Pod pod-projected-configmaps-930997ef-e4bd-4e73-aea7-c65366f7d04c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:52:37.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2533" for this suite.
Feb 13 14:52:43.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:52:44.029: INFO: namespace projected-2533 deletion completed in 6.127195652s

• [SLOW TEST:16.574 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
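
The same ConfigMap consumption as earlier, but through a projected volume, which can merge several sources under one mount point. Sketch of the volume wiring (illustrative names, same imports):

    var projectedPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
        },
    }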
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:52:44.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-0b5a8782-5c04-4ef4-87ab-cdd16a4a52a0
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:52:44.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4504" for this suite.
Feb 13 14:52:50.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:52:50.314: INFO: namespace secrets-4504 deletion completed in 6.18890395s

• [SLOW TEST:6.284 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
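
No pod is involved here: the API server's validation rejects a Secret whose data map contains an empty key, so the Create call itself is expected to fail. Sketch (v1.15-era signature; log imported for the fatal path):

    _, err := client.CoreV1().Secrets("secrets-4504").Create(&corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
        Data:       map[string][]byte{"": []byte("value-1\n")}, // empty key is invalid
    })
    if err == nil {
        log.Fatal("expected a validation error for the empty secret key")
    }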
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:52:50.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-wp7k
STEP: Creating a pod to test atomic-volume-subpath
Feb 13 14:52:50.588: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wp7k" in namespace "subpath-2437" to be "success or failure"
Feb 13 14:52:50.619: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 30.142269ms
Feb 13 14:52:52.624: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03502934s
Feb 13 14:52:54.642: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053541582s
Feb 13 14:52:57.455: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.865948419s
Feb 13 14:52:59.466: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 8.877699558s
Feb 13 14:53:01.474: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 10.88507459s
Feb 13 14:53:03.482: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 12.892980046s
Feb 13 14:53:05.492: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 14.903587983s
Feb 13 14:53:07.502: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 16.913353541s
Feb 13 14:53:09.517: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 18.928698322s
Feb 13 14:53:11.525: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 20.936013033s
Feb 13 14:53:13.532: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 22.943478883s
Feb 13 14:53:15.540: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 24.951213302s
Feb 13 14:53:17.547: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 26.958860941s
Feb 13 14:53:19.567: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Running", Reason="", readiness=true. Elapsed: 28.978231617s
Feb 13 14:53:21.580: INFO: Pod "pod-subpath-test-configmap-wp7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.991751517s
STEP: Saw pod success
Feb 13 14:53:21.581: INFO: Pod "pod-subpath-test-configmap-wp7k" satisfied condition "success or failure"
Feb 13 14:53:21.586: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-wp7k container test-container-subpath-configmap-wp7k: 
STEP: delete the pod
Feb 13 14:53:21.932: INFO: Waiting for pod pod-subpath-test-configmap-wp7k to disappear
Feb 13 14:53:21.937: INFO: Pod pod-subpath-test-configmap-wp7k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wp7k
Feb 13 14:53:21.937: INFO: Deleting pod "pod-subpath-test-configmap-wp7k" in namespace "subpath-2437"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:53:21.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2437" for this suite.
Feb 13 14:53:27.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:53:28.150: INFO: namespace subpath-2437 deletion completed in 6.203736173s

• [SLOW TEST:37.835 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
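
A subPath mount exposes a single file from the volume at the mountPath instead of shadowing a whole directory; in this variant the mountPath targets a path that already exists in the image. Sketch with illustrative names and paths:

    var subpathPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"cat", "/probe-volume/probe-file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "my-volume",
                    MountPath: "/probe-volume/probe-file",
                    // SubPath mounts just this one key's file at the
                    // mountPath, rather than the whole volume directory.
                    SubPath: "probe-file",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "my-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                    },
                },
            }},
        },
    }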
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:53:28.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-fd703c9d-94db-4e13-93c7-32d4826adbfd
STEP: Creating a pod to test consume configMaps
Feb 13 14:53:28.331: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359" in namespace "projected-4558" to be "success or failure"
Feb 13 14:53:28.339: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359": Phase="Pending", Reason="", readiness=false. Elapsed: 7.193707ms
Feb 13 14:53:30.350: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018804351s
Feb 13 14:53:32.359: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027282563s
Feb 13 14:53:34.369: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037675251s
Feb 13 14:53:36.376: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044927499s
STEP: Saw pod success
Feb 13 14:53:36.377: INFO: Pod "pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359" satisfied condition "success or failure"
Feb 13 14:53:36.381: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 14:53:36.563: INFO: Waiting for pod pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359 to disappear
Feb 13 14:53:36.570: INFO: Pod pod-projected-configmaps-0be0b995-add9-49ab-aa3c-08dd76161359 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:53:36.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4558" for this suite.
Feb 13 14:53:42.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:53:42.698: INFO: namespace projected-4558 deletion completed in 6.121809216s

• [SLOW TEST:14.548 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
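
The defaultMode variant sets the mode once on the projected volume source rather than per item (fragment, same imports; 0400 is an assumed value):

    var defaultMode = int32(0400)
    var projectedSource = corev1.ProjectedVolumeSource{
        // Applied to every projected file unless a per-item mode overrides it.
        DefaultMode: &defaultMode,
        Sources:     []corev1.VolumeProjection{ /* configMap projection as above */ },
    }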
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:53:42.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 13 14:53:42.819: INFO: Waiting up to 5m0s for pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0" in namespace "emptydir-6783" to be "success or failure"
Feb 13 14:53:42.974: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 154.102453ms
Feb 13 14:53:44.980: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16022366s
Feb 13 14:53:46.993: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172973975s
Feb 13 14:53:48.999: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179560195s
Feb 13 14:53:51.019: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.19895701s
STEP: Saw pod success
Feb 13 14:53:51.019: INFO: Pod "pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0" satisfied condition "success or failure"
Feb 13 14:53:51.028: INFO: Trying to get logs from node iruya-node pod pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0 container test-container: 
STEP: delete the pod
Feb 13 14:53:51.105: INFO: Waiting for pod pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0 to disappear
Feb 13 14:53:51.199: INFO: Pod pod-fcd3cc78-55a7-4553-8d59-71c6cf43dbd0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:53:51.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6783" for this suite.
Feb 13 14:53:57.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:53:57.362: INFO: namespace emptydir-6783 deletion completed in 6.156696762s

• [SLOW TEST:14.663 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
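
The tmpfs variant only changes the emptyDir medium; the test then verifies the mount's mode bits. Fragment, slotting into a pod spec like the shared-volume sketch earlier:

    VolumeSource: corev1.VolumeSource{
        EmptyDir: &corev1.EmptyDirVolumeSource{
            // Medium "Memory" backs the emptyDir with tmpfs instead of
            // node-local disk.
            Medium: corev1.StorageMediumMemory,
        },
    },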
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:53:57.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-86d3ecb6-6902-4917-8b45-cea68cb95784
STEP: Creating a pod to test consume secrets
Feb 13 14:53:57.501: INFO: Waiting up to 5m0s for pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4" in namespace "secrets-7972" to be "success or failure"
Feb 13 14:53:57.540: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.769903ms
Feb 13 14:53:59.869: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366938915s
Feb 13 14:54:01.893: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390989084s
Feb 13 14:54:03.906: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404420227s
Feb 13 14:54:05.923: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.421421498s
STEP: Saw pod success
Feb 13 14:54:05.923: INFO: Pod "pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4" satisfied condition "success or failure"
Feb 13 14:54:05.929: INFO: Trying to get logs from node iruya-node pod pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4 container secret-env-test: 
STEP: delete the pod
Feb 13 14:54:06.021: INFO: Waiting for pod pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4 to disappear
Feb 13 14:54:06.041: INFO: Pod pod-secrets-35c8bc79-cdec-44b2-9010-d5367b00efc4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:54:06.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7972" for this suite.
Feb 13 14:54:12.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:54:12.199: INFO: namespace secrets-7972 deletion completed in 6.150682094s

• [SLOW TEST:14.837 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
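
Note: the Secrets test above injects a secret key into a container's environment. A hand-rolled sketch of the same pattern (all names illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF
kubectl logs secret-env-demo   # expect SECRET_DATA=value-1
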
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:54:12.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2635
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 14:54:12.296: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 14:54:54.509: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:54:54.509: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:54:54.607760       8 log.go:172] (0xc000a99600) (0xc0026581e0) Create stream
I0213 14:54:54.607879       8 log.go:172] (0xc000a99600) (0xc0026581e0) Stream added, broadcasting: 1
I0213 14:54:54.734235       8 log.go:172] (0xc000a99600) Reply frame received for 1
I0213 14:54:54.734400       8 log.go:172] (0xc000a99600) (0xc001180000) Create stream
I0213 14:54:54.734428       8 log.go:172] (0xc000a99600) (0xc001180000) Stream added, broadcasting: 3
I0213 14:54:54.740629       8 log.go:172] (0xc000a99600) Reply frame received for 3
I0213 14:54:54.740731       8 log.go:172] (0xc000a99600) (0xc001180280) Create stream
I0213 14:54:54.740744       8 log.go:172] (0xc000a99600) (0xc001180280) Stream added, broadcasting: 5
I0213 14:54:54.747266       8 log.go:172] (0xc000a99600) Reply frame received for 5
I0213 14:54:54.902666       8 log.go:172] (0xc000a99600) Data frame received for 3
I0213 14:54:54.902724       8 log.go:172] (0xc001180000) (3) Data frame handling
I0213 14:54:54.902749       8 log.go:172] (0xc001180000) (3) Data frame sent
I0213 14:54:55.059009       8 log.go:172] (0xc000a99600) Data frame received for 1
I0213 14:54:55.059122       8 log.go:172] (0xc000a99600) (0xc001180000) Stream removed, broadcasting: 3
I0213 14:54:55.059214       8 log.go:172] (0xc0026581e0) (1) Data frame handling
I0213 14:54:55.059237       8 log.go:172] (0xc0026581e0) (1) Data frame sent
I0213 14:54:55.059261       8 log.go:172] (0xc000a99600) (0xc001180280) Stream removed, broadcasting: 5
I0213 14:54:55.059303       8 log.go:172] (0xc000a99600) (0xc0026581e0) Stream removed, broadcasting: 1
I0213 14:54:55.059328       8 log.go:172] (0xc000a99600) Go away received
I0213 14:54:55.060076       8 log.go:172] (0xc000a99600) (0xc0026581e0) Stream removed, broadcasting: 1
I0213 14:54:55.060091       8 log.go:172] (0xc000a99600) (0xc001180000) Stream removed, broadcasting: 3
I0213 14:54:55.060100       8 log.go:172] (0xc000a99600) (0xc001180280) Stream removed, broadcasting: 5
Feb 13 14:54:55.060: INFO: Found all expected endpoints: [netserver-0]
Feb 13 14:54:55.069: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 14:54:55.069: INFO: >>> kubeConfig: /root/.kube/config
I0213 14:54:55.116869       8 log.go:172] (0xc0009f8370) (0xc000ea8640) Create stream
I0213 14:54:55.116927       8 log.go:172] (0xc0009f8370) (0xc000ea8640) Stream added, broadcasting: 1
I0213 14:54:55.123813       8 log.go:172] (0xc0009f8370) Reply frame received for 1
I0213 14:54:55.123839       8 log.go:172] (0xc0009f8370) (0xc000ea8820) Create stream
I0213 14:54:55.123847       8 log.go:172] (0xc0009f8370) (0xc000ea8820) Stream added, broadcasting: 3
I0213 14:54:55.125358       8 log.go:172] (0xc0009f8370) Reply frame received for 3
I0213 14:54:55.125379       8 log.go:172] (0xc0009f8370) (0xc0011803c0) Create stream
I0213 14:54:55.125390       8 log.go:172] (0xc0009f8370) (0xc0011803c0) Stream added, broadcasting: 5
I0213 14:54:55.126328       8 log.go:172] (0xc0009f8370) Reply frame received for 5
I0213 14:54:55.216552       8 log.go:172] (0xc0009f8370) Data frame received for 3
I0213 14:54:55.216581       8 log.go:172] (0xc000ea8820) (3) Data frame handling
I0213 14:54:55.216607       8 log.go:172] (0xc000ea8820) (3) Data frame sent
I0213 14:54:55.359965       8 log.go:172] (0xc0009f8370) (0xc000ea8820) Stream removed, broadcasting: 3
I0213 14:54:55.360123       8 log.go:172] (0xc0009f8370) (0xc0011803c0) Stream removed, broadcasting: 5
I0213 14:54:55.360172       8 log.go:172] (0xc0009f8370) Data frame received for 1
I0213 14:54:55.360224       8 log.go:172] (0xc000ea8640) (1) Data frame handling
I0213 14:54:55.360258       8 log.go:172] (0xc000ea8640) (1) Data frame sent
I0213 14:54:55.360279       8 log.go:172] (0xc0009f8370) (0xc000ea8640) Stream removed, broadcasting: 1
I0213 14:54:55.360327       8 log.go:172] (0xc0009f8370) Go away received
I0213 14:54:55.360876       8 log.go:172] (0xc0009f8370) (0xc000ea8640) Stream removed, broadcasting: 1
I0213 14:54:55.360901       8 log.go:172] (0xc0009f8370) (0xc000ea8820) Stream removed, broadcasting: 3
I0213 14:54:55.360978       8 log.go:172] (0xc0009f8370) (0xc0011803c0) Stream removed, broadcasting: 5
Feb 13 14:54:55.361: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:54:55.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2635" for this suite.
Feb 13 14:55:19.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:55:19.534: INFO: namespace pod-network-test-2635 deletion completed in 24.156624594s

• [SLOW TEST:67.334 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
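
Note: the SPDY stream chatter above is the framework exec'ing into its host-network test pod and curling each netserver pod's /hostName endpoint. An equivalent manual probe, using the pod names and IPs from this run (they will differ elsewhere), would be:

kubectl -n pod-network-test-2635 get pods -o wide        # find each netserver pod IP
kubectl -n pod-network-test-2635 exec host-test-container-pod -c hostexec -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName"
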
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:55:19.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0213 14:55:49.735788       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 14:55:49.735: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:55:49.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3163" for this suite.
Feb 13 14:56:00.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:56:00.871: INFO: namespace gc-3163 deletion completed in 11.13061233s

• [SLOW TEST:41.337 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
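
Note: the garbage-collector test above deletes a Deployment with PropagationPolicy=Orphan and verifies the ReplicaSet survives. A minimal manual sketch (deployment name illustrative; --cascade=orphan is the kubectl >= 1.20 spelling, older clients used --cascade=false):

kubectl create deployment gc-demo --image=nginx
kubectl delete deployment gc-demo --cascade=orphan
kubectl get rs -l app=gc-demo   # the ReplicaSet should still exist after the delete
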
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:56:00.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 14:56:00.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc" in namespace "projected-9444" to be "success or failure"
Feb 13 14:56:01.013: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.099421ms
Feb 13 14:56:03.021: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846394s
Feb 13 14:56:05.028: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030050488s
Feb 13 14:56:07.056: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057682941s
Feb 13 14:56:09.062: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064057452s
Feb 13 14:56:11.069: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07023821s
STEP: Saw pod success
Feb 13 14:56:11.069: INFO: Pod "downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc" satisfied condition "success or failure"
Feb 13 14:56:11.072: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc container client-container: 
STEP: delete the pod
Feb 13 14:56:11.112: INFO: Waiting for pod downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc to disappear
Feb 13 14:56:11.119: INFO: Pod downwardapi-volume-b7f2e96b-c230-4792-9ddd-fc5946ecdccc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 14:56:11.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9444" for this suite.
Feb 13 14:56:17.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 14:56:17.230: INFO: namespace projected-9444 deletion completed in 6.080133697s

• [SLOW TEST:16.358 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
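
Note: the projected downwardAPI test above exposes the container's memory limit through a projected volume. A sketch of that wiring (names illustrative; the default divisor of 1 reports the limit in bytes):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs projected-memlimit-demo   # expect 67108864 (64Mi in bytes)
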
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 14:56:17.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-475e2801-d658-4588-aef9-e9b3eaadfafa in namespace container-probe-3333
Feb 13 14:56:25.388: INFO: Started pod test-webserver-475e2801-d658-4588-aef9-e9b3eaadfafa in namespace container-probe-3333
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 14:56:25.400: INFO: Initial restart count of pod test-webserver-475e2801-d658-4588-aef9-e9b3eaadfafa is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:00:25.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3333" for this suite.
Feb 13 15:00:31.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:00:31.846: INFO: namespace container-probe-3333 deletion completed in 6.299646844s

• [SLOW TEST:254.616 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
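
Note: the probe test above runs the framework's test-webserver with an HTTP liveness probe against /healthz and asserts restartCount stays 0 for four minutes. A stand-in with the same shape, swapping in nginx:1.14-alpine (seen later in this run) and probing / instead of /healthz:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: webserver
    image: nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
EOF
kubectl get pod liveness-http-demo   # after several minutes, RESTARTS should still be 0
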
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:00:31.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 13 15:00:31.982: INFO: Waiting up to 5m0s for pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b" in namespace "downward-api-701" to be "success or failure"
Feb 13 15:00:31.997: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.384534ms
Feb 13 15:00:34.014: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032034043s
Feb 13 15:00:36.025: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043584635s
Feb 13 15:00:38.036: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054030007s
Feb 13 15:00:40.075: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093498533s
Feb 13 15:00:42.082: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099840687s
STEP: Saw pod success
Feb 13 15:00:42.082: INFO: Pod "downward-api-c4e3f50c-decc-4d70-9d46-02992373779b" satisfied condition "success or failure"
Feb 13 15:00:42.084: INFO: Trying to get logs from node iruya-node pod downward-api-c4e3f50c-decc-4d70-9d46-02992373779b container dapi-container: 
STEP: delete the pod
Feb 13 15:00:42.190: INFO: Waiting for pod downward-api-c4e3f50c-decc-4d70-9d46-02992373779b to disappear
Feb 13 15:00:42.197: INFO: Pod downward-api-c4e3f50c-decc-4d70-9d46-02992373779b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:00:42.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-701" for this suite.
Feb 13 15:00:48.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:00:48.360: INFO: namespace downward-api-701 deletion completed in 6.156268174s

• [SLOW TEST:16.514 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
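
Note: the downward API test above surfaces the pod's UID as an environment variable via fieldRef. A minimal sketch (names illustrative; the quoted heredoc keeps the outer shell from expanding $POD_UID):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs downward-uid-demo   # expect POD_UID=<the pod's metadata.uid>
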
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:00:48.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6163.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6163.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6163.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.175.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.175.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.175.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.175.79_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6163.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6163.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6163.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6163.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.175.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.175.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.175.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.175.79_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 13 15:01:00.604: INFO: Unable to read wheezy_udp@dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.617: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.640: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.650: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.656: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.661: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.668: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.679: INFO: Unable to read 10.108.175.79_udp@PTR from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.691: INFO: Unable to read 10.108.175.79_tcp@PTR from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.698: INFO: Unable to read jessie_udp@dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.707: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.711: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.714: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.717: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.721: INFO: Unable to read jessie_udp@PodARecord from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.726: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.730: INFO: Unable to read 10.108.175.79_udp@PTR from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.733: INFO: Unable to read 10.108.175.79_tcp@PTR from pod dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37: the server could not find the requested resource (get pods dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37)
Feb 13 15:01:00.733: INFO: Lookups using dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37 failed for: [wheezy_udp@dns-test-service.dns-6163.svc.cluster.local wheezy_tcp@dns-test-service.dns-6163.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.175.79_udp@PTR 10.108.175.79_tcp@PTR jessie_udp@dns-test-service.dns-6163.svc.cluster.local jessie_tcp@dns-test-service.dns-6163.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6163.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6163.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6163.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.175.79_udp@PTR 10.108.175.79_tcp@PTR]

Feb 13 15:01:05.939: INFO: DNS probes using dns-6163/dns-test-b5fc035a-791c-4a61-8a3c-ee1c9def7e37 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:01:06.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6163" for this suite.
Feb 13 15:01:12.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:01:12.878: INFO: namespace dns-6163 deletion completed in 6.222310623s

• [SLOW TEST:24.517 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
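
Note: the probe pods above loop dig over the service's A, SRV, and PTR records from both wheezy and jessie images. A quick manual sanity check of cluster DNS with the dnsutils image used by the Kubernetes DNS debugging docs (probing the always-present kubernetes service rather than the now-deleted test service):

kubectl run -it --rm dns-check --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -- \
  nslookup kubernetes.default.svc.cluster.local
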
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:01:12.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 15:01:12.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6247'
Feb 13 15:01:13.151: INFO: stderr: ""
Feb 13 15:01:13.151: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 13 15:01:13.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6247'
Feb 13 15:01:18.045: INFO: stderr: ""
Feb 13 15:01:18.046: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:01:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6247" for this suite.
Feb 13 15:01:24.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:01:24.185: INFO: namespace kubectl-6247 deletion completed in 6.130357415s

• [SLOW TEST:11.306 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
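
Note: the logged command can be replayed directly; --generator was already deprecated here and has since been removed, so on current clusters plain kubectl run suffices. A sketch with a verification step:

kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # expect: Never
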
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:01:24.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 13 15:01:24.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6728 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 13 15:01:33.958: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0213 15:01:32.753479    3913 log.go:172] (0xc0008de630) (0xc000526960) Create stream\nI0213 15:01:32.753771    3913 log.go:172] (0xc0008de630) (0xc000526960) Stream added, broadcasting: 1\nI0213 15:01:32.767286    3913 log.go:172] (0xc0008de630) Reply frame received for 1\nI0213 15:01:32.767328    3913 log.go:172] (0xc0008de630) (0xc000526140) Create stream\nI0213 15:01:32.767338    3913 log.go:172] (0xc0008de630) (0xc000526140) Stream added, broadcasting: 3\nI0213 15:01:32.769908    3913 log.go:172] (0xc0008de630) Reply frame received for 3\nI0213 15:01:32.769946    3913 log.go:172] (0xc0008de630) (0xc0000fc000) Create stream\nI0213 15:01:32.769955    3913 log.go:172] (0xc0008de630) (0xc0000fc000) Stream added, broadcasting: 5\nI0213 15:01:32.771662    3913 log.go:172] (0xc0008de630) Reply frame received for 5\nI0213 15:01:32.771687    3913 log.go:172] (0xc0008de630) (0xc0005261e0) Create stream\nI0213 15:01:32.771694    3913 log.go:172] (0xc0008de630) (0xc0005261e0) Stream added, broadcasting: 7\nI0213 15:01:32.773341    3913 log.go:172] (0xc0008de630) Reply frame received for 7\nI0213 15:01:32.773935    3913 log.go:172] (0xc000526140) (3) Writing data frame\nI0213 15:01:32.774333    3913 log.go:172] (0xc000526140) (3) Writing data frame\nI0213 15:01:32.783365    3913 log.go:172] (0xc0008de630) Data frame received for 5\nI0213 15:01:32.783399    3913 log.go:172] (0xc0000fc000) (5) Data frame handling\nI0213 15:01:32.783412    3913 log.go:172] (0xc0000fc000) (5) Data frame sent\nI0213 15:01:32.788290    3913 log.go:172] (0xc0008de630) Data frame received for 5\nI0213 15:01:32.788307    3913 log.go:172] (0xc0000fc000) (5) Data frame handling\nI0213 15:01:32.788320    3913 log.go:172] (0xc0000fc000) (5) Data frame sent\nI0213 15:01:33.910272    3913 log.go:172] (0xc0008de630) (0xc000526140) Stream removed, broadcasting: 3\nI0213 15:01:33.910681    3913 log.go:172] (0xc0008de630) Data frame received for 1\nI0213 15:01:33.910704    3913 log.go:172] (0xc000526960) (1) Data frame handling\nI0213 15:01:33.910741    3913 log.go:172] (0xc000526960) (1) Data frame sent\nI0213 15:01:33.910853    3913 log.go:172] (0xc0008de630) (0xc000526960) Stream removed, broadcasting: 1\nI0213 15:01:33.911354    3913 log.go:172] (0xc0008de630) (0xc0000fc000) Stream removed, broadcasting: 5\nI0213 15:01:33.911500    3913 log.go:172] (0xc0008de630) (0xc0005261e0) Stream removed, broadcasting: 7\nI0213 15:01:33.911554    3913 log.go:172] (0xc0008de630) Go away received\nI0213 15:01:33.911678    3913 log.go:172] (0xc0008de630) (0xc000526960) Stream removed, broadcasting: 1\nI0213 15:01:33.911723    3913 log.go:172] (0xc0008de630) (0xc000526140) Stream removed, broadcasting: 3\nI0213 15:01:33.911771    3913 log.go:172] (0xc0008de630) (0xc0000fc000) Stream removed, broadcasting: 5\nI0213 15:01:33.911841    3913 log.go:172] (0xc0008de630) (0xc0005261e0) Stream removed, broadcasting: 7\n"
Feb 13 15:01:33.959: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:01:35.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6728" for this suite.
Feb 13 15:01:42.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:01:42.148: INFO: namespace kubectl-6728 deletion completed in 6.163395157s

• [SLOW TEST:17.963 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
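
Note: the stderr above already warns that --generator=job/v1 is deprecated. A close modern equivalent of the same attach-with-stdin round trip (running a bare pod rather than a Job, which is the main behavioral difference):

echo abcd1234 | kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --restart=Never --attach=true --stdin -- sh -c "cat && echo 'stdin closed'"
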
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:01:42.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 15:01:42.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa" in namespace "downward-api-5816" to be "success or failure"
Feb 13 15:01:42.340: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 16.179147ms
Feb 13 15:01:44.349: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024930439s
Feb 13 15:01:46.357: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033541501s
Feb 13 15:01:48.381: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057227662s
Feb 13 15:01:50.394: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070133066s
STEP: Saw pod success
Feb 13 15:01:50.394: INFO: Pod "downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa" satisfied condition "success or failure"
Feb 13 15:01:50.398: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa container client-container: 
STEP: delete the pod
Feb 13 15:01:50.475: INFO: Waiting for pod downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa to disappear
Feb 13 15:01:50.488: INFO: Pod downwardapi-volume-35e8a4b8-3521-45be-84fd-9cf8bf224bfa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:01:50.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5816" for this suite.
Feb 13 15:01:56.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:01:56.799: INFO: namespace downward-api-5816 deletion completed in 6.301396458s

• [SLOW TEST:14.651 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
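
Note: this test mirrors the earlier projected downwardAPI memory-limit test but uses a plain downwardAPI volume and limits.cpu. A sketch (names illustrative; the default divisor of 1 rounds cpu up to whole cores, so 500m reads back as 1):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpulimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downward-cpulimit-demo   # expect 1
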
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:01:56.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-c1d12dfd-2bd4-451f-8eb3-a51916a54485 in namespace container-probe-9888
Feb 13 15:02:06.961: INFO: Started pod busybox-c1d12dfd-2bd4-451f-8eb3-a51916a54485 in namespace container-probe-9888
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 15:02:06.966: INFO: Initial restart count of pod busybox-c1d12dfd-2bd4-451f-8eb3-a51916a54485 is 0
Feb 13 15:02:59.238: INFO: Restart count of pod container-probe-9888/busybox-c1d12dfd-2bd4-451f-8eb3-a51916a54485 is now 1 (52.272230577s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:02:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9888" for this suite.
Feb 13 15:03:05.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:03:05.580: INFO: namespace container-probe-9888 deletion completed in 6.196322489s

• [SLOW TEST:68.780 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
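
Note: the restart observed above (count 0 -> 1 after ~52s) comes from an exec probe whose target file disappears. A minimal reproduction in the style of the upstream liveness example (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -w   # once /tmp/health is removed, RESTARTS climbs past 0
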
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:03:05.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-2823f3b1-f0f7-4b3d-a7d2-c7108dfa5040
STEP: Creating a pod to test consume secrets
Feb 13 15:03:05.723: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb" in namespace "projected-6359" to be "success or failure"
Feb 13 15:03:05.731: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.385508ms
Feb 13 15:03:07.743: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019453857s
Feb 13 15:03:09.751: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027296205s
Feb 13 15:03:11.761: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037341775s
Feb 13 15:03:13.773: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049512542s
Feb 13 15:03:15.782: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058122409s
STEP: Saw pod success
Feb 13 15:03:15.782: INFO: Pod "pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb" satisfied condition "success or failure"
Feb 13 15:03:15.786: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 15:03:15.995: INFO: Waiting for pod pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb to disappear
Feb 13 15:03:16.002: INFO: Pod pod-projected-secrets-51d1e574-b151-4a2a-adc4-24a14ef8c7bb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:03:16.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6359" for this suite.
Feb 13 15:03:22.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:03:22.211: INFO: namespace projected-6359 deletion completed in 6.203258081s

• [SLOW TEST:16.631 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
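
Note: the projected-secret test above checks that defaultMode is applied to the mounted file. A sketch of the same setup (names illustrative; stat -L dereferences the symlink the kubelet creates for each key):

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a' /etc/projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-mode-demo   # expect 400
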
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:03:22.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 13 15:03:22.427: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:03:35.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8029" for this suite.
Feb 13 15:03:41.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:03:41.758: INFO: namespace init-container-8029 deletion completed in 6.190845553s

• [SLOW TEST:19.547 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
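
Note: the init-container test above asserts that on a restartPolicy: Never pod, a failing init container permanently fails the pod and the app container never starts. A minimal reproduction (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo should never run"]
EOF
kubectl get pod init-fail-demo   # expect STATUS Init:Error; the app container never runs
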
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:03:41.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-964da5ba-ee1b-4a27-837b-73639c2532a3
STEP: Creating a pod to test consume configMaps
Feb 13 15:03:44.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9" in namespace "configmap-8844" to be "success or failure"
Feb 13 15:03:44.146: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803537ms
Feb 13 15:03:46.154: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015092014s
Feb 13 15:03:48.166: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026618558s
Feb 13 15:03:50.195: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056231074s
Feb 13 15:03:52.240: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101221908s
STEP: Saw pod success
Feb 13 15:03:52.241: INFO: Pod "pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9" satisfied condition "success or failure"
Feb 13 15:03:52.259: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9 container configmap-volume-test: 
STEP: delete the pod
Feb 13 15:03:52.368: INFO: Waiting for pod pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9 to disappear
Feb 13 15:03:52.383: INFO: Pod pod-configmaps-1a535e72-e7bf-40bb-979a-34089b77eef9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:03:52.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8844" for this suite.
Feb 13 15:03:58.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:03:58.637: INFO: namespace configmap-8844 deletion completed in 6.226091469s

• [SLOW TEST:16.878 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
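
Note: the ConfigMap test above mounts the same ConfigMap into two volumes of one pod. A sketch of that shape (names illustrative):

kubectl create configmap multivol-demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multivol-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: multivol-demo-cm
  - name: cm-two
    configMap:
      name: multivol-demo-cm
EOF
kubectl logs configmap-multivol-demo   # expect value-1 printed twice
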
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:03:58.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 15:03:58.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783" in namespace "projected-1045" to be "success or failure"
Feb 13 15:03:58.770: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Pending", Reason="", readiness=false. Elapsed: 33.173792ms
Feb 13 15:04:00.779: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041967236s
Feb 13 15:04:02.784: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047111208s
Feb 13 15:04:04.888: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151052927s
Feb 13 15:04:06.894: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157287159s
Feb 13 15:04:08.905: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168291178s
STEP: Saw pod success
Feb 13 15:04:08.905: INFO: Pod "downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783" satisfied condition "success or failure"
Feb 13 15:04:08.910: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783 container client-container: 
STEP: delete the pod
Feb 13 15:04:09.057: INFO: Waiting for pod downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783 to disappear
Feb 13 15:04:09.061: INFO: Pod downwardapi-volume-ac1dafac-6015-48c6-9ed1-e9cddf8b2783 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:04:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1045" for this suite.
Feb 13 15:04:15.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:04:15.336: INFO: namespace projected-1045 deletion completed in 6.270404203s

• [SLOW TEST:16.698 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
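A sketch of the projected downward API volume this spec exercises: the container declares no cpu limit, so the value published at cpu_limit falls back to the node's allocatable CPU. All names here are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the file reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu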
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:04:15.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ef4489ac-9012-4f14-9a39-25b42b2bfeae
STEP: Creating a pod to test consume configMaps
Feb 13 15:04:15.450: INFO: Waiting up to 5m0s for pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00" in namespace "configmap-5231" to be "success or failure"
Feb 13 15:04:15.463: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 13.163509ms
Feb 13 15:04:17.477: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027400147s
Feb 13 15:04:19.491: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040927348s
Feb 13 15:04:21.501: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051436696s
Feb 13 15:04:23.509: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058856661s
Feb 13 15:04:25.517: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066913894s
Feb 13 15:04:27.528: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.078276811s
STEP: Saw pod success
Feb 13 15:04:27.528: INFO: Pod "pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00" satisfied condition "success or failure"
Feb 13 15:04:27.535: INFO: Trying to get logs from node iruya-node pod pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00 container configmap-volume-test: 
STEP: delete the pod
Feb 13 15:04:27.608: INFO: Waiting for pod pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00 to disappear
Feb 13 15:04:27.707: INFO: Pod pod-configmaps-140efab5-4ccd-4797-9fd2-b50a4c7aba00 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:04:27.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5231" for this suite.
Feb 13 15:04:33.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:04:33.874: INFO: namespace configmap-5231 deletion completed in 6.160110125s

• [SLOW TEST:18.539 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:04:33.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 15:04:34.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2081'
Feb 13 15:04:36.304: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 13 15:04:36.304: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 13 15:04:36.318: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 13 15:04:36.318: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 13 15:04:36.342: INFO: scanned /root for discovery docs: 
Feb 13 15:04:36.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2081'
Feb 13 15:04:59.845: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 13 15:04:59.845: INFO: stdout: "Created e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20\nScaling up e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 13 15:04:59.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2081'
Feb 13 15:05:00.050: INFO: stderr: ""
Feb 13 15:05:00.050: INFO: stdout: "e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l e2e-test-nginx-rc-zlb97 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 13 15:05:05.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2081'
Feb 13 15:05:05.188: INFO: stderr: ""
Feb 13 15:05:05.189: INFO: stdout: "e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l e2e-test-nginx-rc-zlb97 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 13 15:05:10.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2081'
Feb 13 15:05:10.347: INFO: stderr: ""
Feb 13 15:05:10.347: INFO: stdout: "e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l "
Feb 13 15:05:10.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2081'
Feb 13 15:05:10.443: INFO: stderr: ""
Feb 13 15:05:10.443: INFO: stdout: "true"
Feb 13 15:05:10.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2081'
Feb 13 15:05:10.547: INFO: stderr: ""
Feb 13 15:05:10.547: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 13 15:05:10.547: INFO: e2e-test-nginx-rc-41811e15cef896f7a3c9a1079fb3db20-kzp6l is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 13 15:05:10.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2081'
Feb 13 15:05:10.706: INFO: stderr: ""
Feb 13 15:05:10.707: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:05:10.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2081" for this suite.
Feb 13 15:05:32.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:05:32.965: INFO: namespace kubectl-2081 deletion completed in 22.237926888s

• [SLOW TEST:59.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
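The deprecation warnings above point at the replacement workflow. Roughly, with hypothetical names (the logged commands use e2e-test-nginx-rc):

# deprecated path exercised by this test: RC + rolling-update
kubectl run my-rc --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl rolling-update my-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent

# rough modern equivalent (kubectl 1.15+): Deployment + rollout
kubectl create deployment my-deploy --image=docker.io/library/nginx:1.14-alpine
kubectl rollout restart deployment/my-deploy    # re-roll to the same image
kubectl rollout status deployment/my-deploy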
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:05:32.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 15:05:59.137: INFO: Container started at 2020-02-13 15:05:42 +0000 UTC, pod became ready at 2020-02-13 15:05:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:05:59.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8500" for this suite.
Feb 13 15:06:21.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:06:21.352: INFO: namespace container-probe-8500 deletion completed in 22.208273782s

• [SLOW TEST:48.386 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
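A minimal readinessProbe of the kind this spec relies on: the pod must stay NotReady until initialDelaySeconds has elapsed, and must never restart. The image and delay here are assumptions, not the test's exact values.

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo         # hypothetical
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine   # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30  # assumed; readiness is gated on this delay
      periodSeconds: 5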
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:06:21.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6/configmap-test-f81d35f2-717d-497e-b561-c91043d2c317
STEP: Creating a pod to test consume configMaps
Feb 13 15:06:21.469: INFO: Waiting up to 5m0s for pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c" in namespace "configmap-6" to be "success or failure"
Feb 13 15:06:21.473: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.913401ms
Feb 13 15:06:23.482: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012206406s
Feb 13 15:06:25.488: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018855408s
Feb 13 15:06:27.499: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029844178s
Feb 13 15:06:29.510: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040362878s
Feb 13 15:06:31.522: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05294898s
STEP: Saw pod success
Feb 13 15:06:31.523: INFO: Pod "pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c" satisfied condition "success or failure"
Feb 13 15:06:31.528: INFO: Trying to get logs from node iruya-node pod pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c container env-test: 
STEP: delete the pod
Feb 13 15:06:31.754: INFO: Waiting for pod pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c to disappear
Feb 13 15:06:31.769: INFO: Pod pod-configmaps-dffcadd4-6514-44a5-94ab-785120865a4c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:06:31.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6" for this suite.
Feb 13 15:06:37.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:06:37.982: INFO: namespace configmap-6 deletion completed in 6.192324104s

• [SLOW TEST:16.630 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
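The environment-consumption variant looks roughly like this; my-config and data-1 are hypothetical stand-ins for the randomized names above.

apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: my-config      # hypothetical ConfigMap
          key: data-1          # hypothetical key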
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:06:37.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 13 15:06:38.148: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 13 15:06:38.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:38.908: INFO: stderr: ""
Feb 13 15:06:38.908: INFO: stdout: "service/redis-slave created\n"
Feb 13 15:06:38.910: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 13 15:06:38.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:39.417: INFO: stderr: ""
Feb 13 15:06:39.417: INFO: stdout: "service/redis-master created\n"
Feb 13 15:06:39.418: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 13 15:06:39.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:40.033: INFO: stderr: ""
Feb 13 15:06:40.034: INFO: stdout: "service/frontend created\n"
Feb 13 15:06:40.036: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 13 15:06:40.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:40.462: INFO: stderr: ""
Feb 13 15:06:40.462: INFO: stdout: "deployment.apps/frontend created\n"
Feb 13 15:06:40.464: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 13 15:06:40.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:41.862: INFO: stderr: ""
Feb 13 15:06:41.862: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 13 15:06:41.864: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 13 15:06:41.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562'
Feb 13 15:06:42.905: INFO: stderr: ""
Feb 13 15:06:42.906: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 13 15:06:42.906: INFO: Waiting for all frontend pods to be Running.
Feb 13 15:07:07.958: INFO: Waiting for frontend to serve content.
Feb 13 15:07:08.076: INFO: Trying to add a new entry to the guestbook.
Feb 13 15:07:08.120: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 13 15:07:08.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:08.500: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:08.500: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 15:07:08.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:08.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:08.808: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 15:07:08.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:08.999: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:08.999: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 15:07:09.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:09.135: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:09.135: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 15:07:09.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:09.254: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:09.255: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 15:07:09.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562'
Feb 13 15:07:09.645: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 15:07:09.645: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:07:09.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7562" for this suite.
Feb 13 15:08:01.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:08:02.075: INFO: namespace kubectl-7562 deletion completed in 52.42005104s

• [SLOW TEST:84.091 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:08:02.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 15:08:02.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25" in namespace "downward-api-8241" to be "success or failure"
Feb 13 15:08:02.217: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707241ms
Feb 13 15:08:04.228: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015119468s
Feb 13 15:08:06.235: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022629758s
Feb 13 15:08:08.254: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041023062s
Feb 13 15:08:10.269: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056335232s
Feb 13 15:08:12.279: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06605835s
STEP: Saw pod success
Feb 13 15:08:12.279: INFO: Pod "downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25" satisfied condition "success or failure"
Feb 13 15:08:12.282: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25 container client-container: 
STEP: delete the pod
Feb 13 15:08:12.409: INFO: Waiting for pod downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25 to disappear
Feb 13 15:08:12.419: INFO: Pod downwardapi-volume-07849546-f3fb-486d-a71e-5b4b0c0a8c25 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:08:12.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8241" for this suite.
Feb 13 15:08:18.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:08:18.579: INFO: namespace downward-api-8241 deletion completed in 6.149163816s

• [SLOW TEST:16.503 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
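Per-item file modes on a downward API volume, as this spec asserts, look roughly like this (names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo  # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400             # the per-item mode under test; lists as -r--------
        fieldRef:
          fieldPath: metadata.name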
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:08:18.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-4249
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4249 to expose endpoints map[]
Feb 13 15:08:18.770: INFO: successfully validated that service multi-endpoint-test in namespace services-4249 exposes endpoints map[] (32.618737ms elapsed)
STEP: Creating pod pod1 in namespace services-4249
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4249 to expose endpoints map[pod1:[100]]
Feb 13 15:08:22.982: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.181243162s elapsed, will retry)
Feb 13 15:08:27.040: INFO: successfully validated that service multi-endpoint-test in namespace services-4249 exposes endpoints map[pod1:[100]] (8.239046174s elapsed)
STEP: Creating pod pod2 in namespace services-4249
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4249 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 13 15:08:32.168: INFO: Unexpected endpoints: found map[9e9e8435-e4f7-42bd-bb5f-c3f1c10d23bb:[100]], expected map[pod1:[100] pod2:[101]] (5.115504806s elapsed, will retry)
Feb 13 15:08:35.365: INFO: successfully validated that service multi-endpoint-test in namespace services-4249 exposes endpoints map[pod1:[100] pod2:[101]] (8.312776621s elapsed)
STEP: Deleting pod pod1 in namespace services-4249
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4249 to expose endpoints map[pod2:[101]]
Feb 13 15:08:36.412: INFO: successfully validated that service multi-endpoint-test in namespace services-4249 exposes endpoints map[pod2:[101]] (1.039480752s elapsed)
STEP: Deleting pod pod2 in namespace services-4249
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4249 to expose endpoints map[]
Feb 13 15:08:37.465: INFO: successfully validated that service multi-endpoint-test in namespace services-4249 exposes endpoints map[] (1.043456956s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:08:38.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4249" for this suite.
Feb 13 15:08:44.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:08:44.697: INFO: namespace services-4249 deletion completed in 6.16469842s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:26.118 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
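A multiport Service sketch consistent with the endpoint maps above (pods exposing container ports 100 and 101); the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # assumed label
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # matches endpoints map[pod1:[100]]
  - name: portname2
    port: 81
    targetPort: 101            # matches endpoints map[pod2:[101]]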
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:08:44.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 13 15:08:53.429: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c8d9961e-0716-4409-a6b9-84d74b8ebd0e"
Feb 13 15:08:53.430: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c8d9961e-0716-4409-a6b9-84d74b8ebd0e" in namespace "pods-2982" to be "terminated due to deadline exceeded"
Feb 13 15:08:53.497: INFO: Pod "pod-update-activedeadlineseconds-c8d9961e-0716-4409-a6b9-84d74b8ebd0e": Phase="Running", Reason="", readiness=true. Elapsed: 67.078338ms
Feb 13 15:08:55.510: INFO: Pod "pod-update-activedeadlineseconds-c8d9961e-0716-4409-a6b9-84d74b8ebd0e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.079965745s
Feb 13 15:08:55.510: INFO: Pod "pod-update-activedeadlineseconds-c8d9961e-0716-4409-a6b9-84d74b8ebd0e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:08:55.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2982" for this suite.
Feb 13 15:09:01.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:09:01.709: INFO: namespace pods-2982 deletion completed in 6.180943938s

• [SLOW TEST:17.011 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
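activeDeadlineSeconds is one of the few pod spec fields that is mutable in place, which is what "updating the pod" does above. A hedged CLI equivalent, with a hypothetical pod name:

kubectl patch pod my-pod -p '{"spec":{"activeDeadlineSeconds":5}}'
# once 5s of active time elapse, the pod goes Phase=Failed, Reason=DeadlineExceeded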
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:09:01.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 13 15:09:03.192: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213898,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 15:09:03.192: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213899,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 13 15:09:03.192: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213900,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 13 15:09:13.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213914,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 15:09:13.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213915,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 13 15:09:13.311: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3458,SelfLink:/api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-label-changed,UID:8f554606-cd41-4e71-b4c4-78e20c12f84e,ResourceVersion:24213916,Generation:0,CreationTimestamp:2020-02-13 15:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:09:13.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3458" for this suite.
Feb 13 15:09:19.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:09:19.443: INFO: namespace watch-3458 deletion completed in 6.125878117s

• [SLOW TEST:17.733 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
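The same selector-scoped watch can be reproduced from the CLI; the label and namespace here are taken from the log above:

kubectl get configmaps --namespace=watch-3458 \
    -l watch-this-configmap=label-changed-and-restored --watch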
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:09:19.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 13 15:09:19.558: INFO: Waiting up to 5m0s for pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e" in namespace "downward-api-7874" to be "success or failure"
Feb 13 15:09:19.566: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159357ms
Feb 13 15:09:21.572: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014547187s
Feb 13 15:09:23.605: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047511267s
Feb 13 15:09:25.613: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054886366s
Feb 13 15:09:27.625: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06748339s
Feb 13 15:09:29.722: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164465297s
STEP: Saw pod success
Feb 13 15:09:29.723: INFO: Pod "downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e" satisfied condition "success or failure"
Feb 13 15:09:29.728: INFO: Trying to get logs from node iruya-node pod downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e container dapi-container: 
STEP: delete the pod
Feb 13 15:09:29.901: INFO: Waiting for pod downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e to disappear
Feb 13 15:09:29.908: INFO: Pod downward-api-fe485519-a9ae-4239-8753-0cfd195b8e7e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:09:29.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7874" for this suite.
Feb 13 15:09:35.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:09:36.040: INFO: namespace downward-api-7874 deletion completed in 6.125157953s

• [SLOW TEST:16.597 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
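The env-var flavor of the same fallback: with no resources.limits declared, resourceFieldRef resolves to node allocatable. Names are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: dapi-limits-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    # no resources.limits declared, so both values fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory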
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:09:36.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-14632269-1ad2-4ee4-b980-7b84cc5e9cf2
STEP: Creating secret with name secret-projected-all-test-volume-bfa93e3a-661c-4309-abf1-4148d16c1d93
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 13 15:09:36.162: INFO: Waiting up to 5m0s for pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0" in namespace "projected-3260" to be "success or failure"
Feb 13 15:09:36.171: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50803ms
Feb 13 15:09:38.180: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017928911s
Feb 13 15:09:40.189: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027063223s
Feb 13 15:09:42.194: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032349525s
Feb 13 15:09:44.200: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037557577s
Feb 13 15:09:46.209: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04697789s
STEP: Saw pod success
Feb 13 15:09:46.209: INFO: Pod "projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0" satisfied condition "success or failure"
Feb 13 15:09:46.214: INFO: Trying to get logs from node iruya-node pod projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0 container projected-all-volume-test: 
STEP: delete the pod
Feb 13 15:09:47.111: INFO: Waiting for pod projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0 to disappear
Feb 13 15:09:47.120: INFO: Pod projected-volume-27ffc590-c0a3-43aa-a27a-d3b6f3a73ac0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:09:47.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3260" for this suite.
Feb 13 15:09:53.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:09:53.346: INFO: namespace projected-3260 deletion completed in 6.215444918s

• [SLOW TEST:17.306 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
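All three projection sources in one volume, matching the configMap + secret + downward API combination created above (object names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /projected-volume/podname /projected-volume/configmap-data /projected-volume/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: my-config      # hypothetical
          items:
          - key: data-1
            path: configmap-data
      - secret:
          name: my-secret      # hypothetical
          items:
          - key: data-1
            path: secret-data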
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:09:53.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 13 15:09:53.488: INFO: Waiting up to 5m0s for pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6" in namespace "downward-api-2876" to be "success or failure"
Feb 13 15:09:53.492: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.859712ms
Feb 13 15:09:55.499: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01088536s
Feb 13 15:09:57.513: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024625139s
Feb 13 15:09:59.517: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029382412s
Feb 13 15:10:01.533: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044793791s
Feb 13 15:10:03.542: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053969151s
Feb 13 15:10:05.549: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06105214s
STEP: Saw pod success
Feb 13 15:10:05.549: INFO: Pod "downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6" satisfied condition "success or failure"
Feb 13 15:10:05.553: INFO: Trying to get logs from node iruya-node pod downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6 container dapi-container: 
STEP: delete the pod
Feb 13 15:10:05.663: INFO: Waiting for pod downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6 to disappear
Feb 13 15:10:05.685: INFO: Pod downward-api-1b16c64e-5796-4b1c-bb8c-c5fb16d826c6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:10:05.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2876" for this suite.
Feb 13 15:10:11.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:10:11.961: INFO: namespace downward-api-2876 deletion completed in 6.225439058s

• [SLOW TEST:18.614 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
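The host-IP injection under test is a one-field fieldRef; everything else here is hypothetical scaffolding:

apiVersion: v1
kind: Pod
metadata:
  name: host-ip-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP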
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:10:11.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 13 15:10:12.073: INFO: Waiting up to 5m0s for pod "var-expansion-e797f292-c695-4076-8332-093eae987946" in namespace "var-expansion-2980" to be "success or failure"
Feb 13 15:10:12.091: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Pending", Reason="", readiness=false. Elapsed: 17.69476ms
Feb 13 15:10:14.098: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024721305s
Feb 13 15:10:17.440: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Pending", Reason="", readiness=false. Elapsed: 5.367029688s
Feb 13 15:10:19.450: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Pending", Reason="", readiness=false. Elapsed: 7.376966351s
Feb 13 15:10:21.466: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Pending", Reason="", readiness=false. Elapsed: 9.392447268s
Feb 13 15:10:23.476: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.402678855s
STEP: Saw pod success
Feb 13 15:10:23.476: INFO: Pod "var-expansion-e797f292-c695-4076-8332-093eae987946" satisfied condition "success or failure"
Feb 13 15:10:23.481: INFO: Trying to get logs from node iruya-node pod var-expansion-e797f292-c695-4076-8332-093eae987946 container dapi-container: 
STEP: delete the pod
Feb 13 15:10:23.623: INFO: Waiting for pod var-expansion-e797f292-c695-4076-8332-093eae987946 to disappear
Feb 13 15:10:23.642: INFO: Pod var-expansion-e797f292-c695-4076-8332-093eae987946 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:10:23.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2980" for this suite.
Feb 13 15:10:29.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:10:29.977: INFO: namespace var-expansion-2980 deletion completed in 6.325595033s

• [SLOW TEST:18.015 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
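
For reference: Kubernetes expands $(VAR) references in a container's command and args using the container's own env vars. A minimal sketch under that assumption (names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo expanded: $(TEST_VAR)"]   # $(TEST_VAR) is substituted before the container starts
    env:
    - name: TEST_VAR
      value: "test-value"

An unresolvable $(NAME) is left as literal text, and $$(NAME) escapes the expansion.
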
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:10:29.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:10:42.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3481" for this suite.
Feb 13 15:10:48.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:10:48.360: INFO: namespace kubelet-test-3481 deletion completed in 6.166460963s

• [SLOW TEST:18.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
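
For reference: this test runs a container whose command always exits non-zero and asserts that its status carries a terminated state with a non-empty reason. A rough analogue you could apply by hand (image and names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always exits 1

Once the pod fails, kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' should print a reason such as Error.
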
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:10:48.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 13 15:10:48.605: INFO: Number of nodes with available pods: 0
Feb 13 15:10:48.605: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:49.619: INFO: Number of nodes with available pods: 0
Feb 13 15:10:49.619: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:50.717: INFO: Number of nodes with available pods: 0
Feb 13 15:10:50.717: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:51.628: INFO: Number of nodes with available pods: 0
Feb 13 15:10:51.629: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:52.623: INFO: Number of nodes with available pods: 0
Feb 13 15:10:52.623: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:53.633: INFO: Number of nodes with available pods: 0
Feb 13 15:10:53.633: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:54.651: INFO: Number of nodes with available pods: 0
Feb 13 15:10:54.651: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:55.626: INFO: Number of nodes with available pods: 0
Feb 13 15:10:55.626: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:57.345: INFO: Number of nodes with available pods: 0
Feb 13 15:10:57.345: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:57.690: INFO: Number of nodes with available pods: 0
Feb 13 15:10:57.690: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:58.632: INFO: Number of nodes with available pods: 1
Feb 13 15:10:58.632: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:10:59.616: INFO: Number of nodes with available pods: 1
Feb 13 15:10:59.616: INFO: Node iruya-node is running more than one daemon pod
Feb 13 15:11:00.676: INFO: Number of nodes with available pods: 2
Feb 13 15:11:00.676: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 13 15:11:00.827: INFO: Number of nodes with available pods: 1
Feb 13 15:11:00.827: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:01.854: INFO: Number of nodes with available pods: 1
Feb 13 15:11:01.854: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:03.036: INFO: Number of nodes with available pods: 1
Feb 13 15:11:03.036: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:03.841: INFO: Number of nodes with available pods: 1
Feb 13 15:11:03.841: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:04.848: INFO: Number of nodes with available pods: 1
Feb 13 15:11:04.848: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:05.843: INFO: Number of nodes with available pods: 1
Feb 13 15:11:05.843: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:06.864: INFO: Number of nodes with available pods: 1
Feb 13 15:11:06.864: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:07.882: INFO: Number of nodes with available pods: 1
Feb 13 15:11:07.882: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:09.101: INFO: Number of nodes with available pods: 1
Feb 13 15:11:09.101: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:10.371: INFO: Number of nodes with available pods: 1
Feb 13 15:11:10.371: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:10.854: INFO: Number of nodes with available pods: 1
Feb 13 15:11:10.854: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:11.872: INFO: Number of nodes with available pods: 1
Feb 13 15:11:11.872: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:13.166: INFO: Number of nodes with available pods: 1
Feb 13 15:11:13.166: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:13.932: INFO: Number of nodes with available pods: 1
Feb 13 15:11:13.932: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:14.839: INFO: Number of nodes with available pods: 1
Feb 13 15:11:14.839: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 13 15:11:15.855: INFO: Number of nodes with available pods: 2
Feb 13 15:11:15.855: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7773, will wait for the garbage collector to delete the pods
Feb 13 15:11:15.936: INFO: Deleting DaemonSet.extensions daemon-set took: 19.044325ms
Feb 13 15:11:16.237: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.842877ms
Feb 13 15:11:27.943: INFO: Number of nodes with available pods: 0
Feb 13 15:11:27.943: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 15:11:27.947: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7773/daemonsets","resourceVersion":"24214271"},"items":null}

Feb 13 15:11:27.951: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7773/pods","resourceVersion":"24214271"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:11:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7773" for this suite.
Feb 13 15:11:33.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:11:34.087: INFO: namespace daemonsets-7773 deletion completed in 6.1201667s

• [SLOW TEST:45.727 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
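
For reference: the test creates a DaemonSet on this two-node cluster, waits until one daemon pod is available per node, deletes one pod, and checks that the controller revives it. The repeated "is running more than one daemon pod" lines appear to be the poll loop's status message, printed whenever a node's available-pod count is not exactly one (including zero). A minimal DaemonSet sketch (selector, labels, and image are assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
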
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:11:34.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 13 15:11:44.782: INFO: Successfully updated pod "labelsupdatedf0036f0-b4bc-4ea4-92a6-55b1e298bdd6"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:11:46.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8629" for this suite.
Feb 13 15:12:08.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:12:09.049: INFO: namespace projected-8629 deletion completed in 22.18409703s

• [SLOW TEST:34.961 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
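
For reference: the test mounts the pod's labels as a file through a projected downwardAPI volume, patches the labels, and waits for the kubelet to rewrite the file — volume projections track label updates, unlike env vars, which are fixed at container start. A sketch under those assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels                  # file name under the mount
            fieldRef:
              fieldPath: metadata.labels  # refreshed when labels change
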
SSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:12:09.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7596, will wait for the garbage collector to delete the pods
Feb 13 15:12:21.223: INFO: Deleting Job.batch foo took: 16.480805ms
Feb 13 15:12:21.523: INFO: Terminating Job.batch foo pods took: 300.530925ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:12:59.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7596" for this suite.
Feb 13 15:13:05.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:13:05.177: INFO: namespace job-7596 deletion completed in 6.143739419s

• [SLOW TEST:56.128 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
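
For reference: the test creates a parallel Job named foo, waits until the number of active pods equals spec.parallelism, deletes the Job, and lets the garbage collector remove the pods. A minimal sketch of such a Job (parallelism, image, and command are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sleep", "3600"]

Deleting it with kubectl delete job foo would likewise cascade to the pods, matching the garbage-collector wait seen above.
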
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:13:05.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 13 15:13:05.266: INFO: Waiting up to 5m0s for pod "pod-04006984-7d7a-41bb-8798-1258cae62a60" in namespace "emptydir-3885" to be "success or failure"
Feb 13 15:13:05.318: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Pending", Reason="", readiness=false. Elapsed: 51.86977ms
Feb 13 15:13:07.330: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063983649s
Feb 13 15:13:09.341: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074911898s
Feb 13 15:13:11.355: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088687142s
Feb 13 15:13:13.366: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100123957s
Feb 13 15:13:15.823: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556439614s
STEP: Saw pod success
Feb 13 15:13:15.823: INFO: Pod "pod-04006984-7d7a-41bb-8798-1258cae62a60" satisfied condition "success or failure"
Feb 13 15:13:15.839: INFO: Trying to get logs from node iruya-node pod pod-04006984-7d7a-41bb-8798-1258cae62a60 container test-container: 
STEP: delete the pod
Feb 13 15:13:16.010: INFO: Waiting for pod pod-04006984-7d7a-41bb-8798-1258cae62a60 to disappear
Feb 13 15:13:16.044: INFO: Pod pod-04006984-7d7a-41bb-8798-1258cae62a60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:13:16.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3885" for this suite.
Feb 13 15:13:22.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:13:22.190: INFO: namespace emptydir-3885 deletion completed in 6.136333254s

• [SLOW TEST:17.013 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
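
For reference: the emptyDir permission tests run a container (as a non-root UID here) that creates a file with the requested mode on the given medium and verifies the result. A rough YAML analogue — the real suite uses a dedicated mount-test image rather than this shell one-liner:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # assumed non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # default medium: node-local disk
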
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:13:22.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services are included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 13 15:13:22.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 13 15:13:22.455: INFO: stderr: ""
Feb 13 15:13:22.455: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:13:22.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1657" for this suite.
Feb 13 15:13:28.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:13:28.670: INFO: namespace kubectl-1657 deletion completed in 6.210441s

• [SLOW TEST:6.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services are included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:13:28.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-217c103a-f90f-4abc-8f11-44b81eab3950
STEP: Creating a pod to test consume secrets
Feb 13 15:13:28.802: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356" in namespace "projected-8224" to be "success or failure"
Feb 13 15:13:28.806: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936298ms
Feb 13 15:13:30.814: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011809042s
Feb 13 15:13:32.825: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023636181s
Feb 13 15:13:34.832: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030643127s
Feb 13 15:13:36.841: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03960973s
Feb 13 15:13:38.856: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05410386s
STEP: Saw pod success
Feb 13 15:13:38.856: INFO: Pod "pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356" satisfied condition "success or failure"
Feb 13 15:13:38.870: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356 container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 15:13:39.029: INFO: Waiting for pod pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356 to disappear
Feb 13 15:13:39.043: INFO: Pod pod-projected-secrets-3b8780d3-9b03-4420-90d2-e56eb6108356 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:13:39.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8224" for this suite.
Feb 13 15:13:45.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:13:45.215: INFO: namespace projected-8224 deletion completed in 6.151806107s

• [SLOW TEST:16.544 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
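
For reference: the test projects a Secret into a volume and has the container read a key back. A sketch (secret name, key, and value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
data:
  data-1: dmFsdWUtMQ==        # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test

The projected file contains the decoded bytes, not the base64 text.
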
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:13:45.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 13 15:13:45.381: INFO: Waiting up to 5m0s for pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95" in namespace "downward-api-5587" to be "success or failure"
Feb 13 15:13:45.417: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Pending", Reason="", readiness=false. Elapsed: 35.978632ms
Feb 13 15:13:47.425: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04384913s
Feb 13 15:13:49.435: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054467167s
Feb 13 15:13:51.441: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060480542s
Feb 13 15:13:53.454: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072617554s
Feb 13 15:13:55.461: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080495198s
STEP: Saw pod success
Feb 13 15:13:55.462: INFO: Pod "downward-api-ed820007-4e42-450c-a254-b45c53385f95" satisfied condition "success or failure"
Feb 13 15:13:55.465: INFO: Trying to get logs from node iruya-node pod downward-api-ed820007-4e42-450c-a254-b45c53385f95 container dapi-container: 
STEP: delete the pod
Feb 13 15:13:55.735: INFO: Waiting for pod downward-api-ed820007-4e42-450c-a254-b45c53385f95 to disappear
Feb 13 15:13:55.830: INFO: Pod downward-api-ed820007-4e42-450c-a254-b45c53385f95 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:13:55.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5587" for this suite.
Feb 13 15:14:01.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:14:02.050: INFO: namespace downward-api-5587 deletion completed in 6.213555509s

• [SLOW TEST:16.835 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
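
For reference: this variant exposes pod identity rather than the host IP. The same pod shape as the earlier downward API sketch, with the relevant fieldRefs:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-identity-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
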
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:14:02.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 13 15:14:02.373: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214657,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 15:14:02.373: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214657,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 13 15:14:12.391: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214671,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 13 15:14:12.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214671,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 13 15:14:22.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214686,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 15:14:22.405: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214686,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 13 15:14:32.416: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214701,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 15:14:32.416: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-a,UID:87d04ac0-432d-4dfb-9d9b-44aaa9224a94,ResourceVersion:24214701,Generation:0,CreationTimestamp:2020-02-13 15:14:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 13 15:14:42.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-b,UID:10007859-f568-4e11-8951-03350cce8234,ResourceVersion:24214715,Generation:0,CreationTimestamp:2020-02-13 15:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 15:14:42.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-b,UID:10007859-f568-4e11-8951-03350cce8234,ResourceVersion:24214715,Generation:0,CreationTimestamp:2020-02-13 15:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 13 15:14:52.459: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-b,UID:10007859-f568-4e11-8951-03350cce8234,ResourceVersion:24214729,Generation:0,CreationTimestamp:2020-02-13 15:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 15:14:52.459: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4694,SelfLink:/api/v1/namespaces/watch-4694/configmaps/e2e-watch-test-configmap-b,UID:10007859-f568-4e11-8951-03350cce8234,ResourceVersion:24214729,Generation:0,CreationTimestamp:2020-02-13 15:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:15:02.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4694" for this suite.
Feb 13 15:15:08.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:15:08.636: INFO: namespace watch-4694 deletion completed in 6.163129947s

• [SLOW TEST:66.585 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
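
For reference: the test opens three watches filtered by label selector (label A, label B, and A-or-B), then creates, mutates, and deletes ConfigMaps, checking that each watcher sees exactly the expected ADDED/MODIFIED/DELETED events; the doubled log lines above are the same event arriving on two of the three watches. A matching ConfigMap sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"

As a hand-run illustration (not the suite's mechanism), kubectl get configmaps --watch -l 'watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)' would follow the same A-or-B event stream.
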
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:15:08.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 13 15:15:08.763: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:15:09.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7845" for this suite.
Feb 13 15:15:15.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:15:15.995: INFO: namespace custom-resource-definition-7845 deletion completed in 6.136868618s

• [SLOW TEST:7.359 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
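
For reference: the test registers a randomly named CustomResourceDefinition through the apiextensions API and deletes it again; no custom objects are created. Against this v1.15 server the v1beta1 CRD schema applies; a minimal sketch (group and kind are illustrative):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must equal <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
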
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:15:15.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 15:15:16.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2238'
Feb 13 15:15:18.107: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 13 15:15:18.107: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 13 15:15:18.162: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4mqwh]
Feb 13 15:15:18.163: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4mqwh" in namespace "kubectl-2238" to be "running and ready"
Feb 13 15:15:18.215: INFO: Pod "e2e-test-nginx-rc-4mqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 52.087522ms
Feb 13 15:15:20.225: INFO: Pod "e2e-test-nginx-rc-4mqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061961605s
Feb 13 15:15:22.231: INFO: Pod "e2e-test-nginx-rc-4mqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0682287s
Feb 13 15:15:24.237: INFO: Pod "e2e-test-nginx-rc-4mqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074505904s
Feb 13 15:15:26.249: INFO: Pod "e2e-test-nginx-rc-4mqwh": Phase="Running", Reason="", readiness=true. Elapsed: 8.086394125s
Feb 13 15:15:26.249: INFO: Pod "e2e-test-nginx-rc-4mqwh" satisfied condition "running and ready"
Feb 13 15:15:26.249: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4mqwh]
Feb 13 15:15:26.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2238'
Feb 13 15:15:26.577: INFO: stderr: ""
Feb 13 15:15:26.577: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 13 15:15:26.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2238'
Feb 13 15:15:26.708: INFO: stderr: ""
Feb 13 15:15:26.708: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:15:26.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2238" for this suite.
Feb 13 15:15:48.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:15:48.904: INFO: namespace kubectl-2238 deletion completed in 22.191792056s

• [SLOW TEST:32.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
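
For reference: with the deprecated --generator=run/v1 flag seen above, kubectl run creates a ReplicationController rather than a Deployment. A rough equivalent of what the server ends up with (the run label is kubectl's convention; exact defaults may differ):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

The empty stdout from kubectl logs rc/e2e-test-nginx-rc is consistent with nginx having served no requests yet.
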
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:15:48.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 15:15:48.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae" in namespace "projected-2372" to be "success or failure"
Feb 13 15:15:49.081: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 90.44564ms
Feb 13 15:15:51.092: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101842843s
Feb 13 15:15:53.098: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107528949s
Feb 13 15:15:55.108: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117336349s
Feb 13 15:15:57.114: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123660092s
STEP: Saw pod success
Feb 13 15:15:57.114: INFO: Pod "downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae" satisfied condition "success or failure"
Feb 13 15:15:57.117: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae container client-container: 
STEP: delete the pod
Feb 13 15:15:57.202: INFO: Waiting for pod downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae to disappear
Feb 13 15:15:57.209: INFO: Pod downwardapi-volume-513f881d-3a09-449f-832a-3f0a85bfc5ae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:15:57.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2372" for this suite.
Feb 13 15:16:03.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:16:03.369: INFO: namespace projected-2372 deletion completed in 6.155021662s

• [SLOW TEST:14.464 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
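
For reference: resource values reach the container through resourceFieldRef rather than fieldRef. Here the test projects the container's CPU limit into a file; a sketch with assumed file name, divisor, and limit:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m          # report in millicores

With divisor 1m the file reads 1000. The next test, "should set mode on item file", exercises the same items list with a per-file mode field (e.g. mode: 0400).
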
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:16:03.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 13 15:16:03.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57" in namespace "projected-5589" to be "success or failure"
Feb 13 15:16:03.513: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57": Phase="Pending", Reason="", readiness=false. Elapsed: 17.081747ms
Feb 13 15:16:05.528: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031355126s
Feb 13 15:16:07.591: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095293211s
Feb 13 15:16:09.600: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104131205s
Feb 13 15:16:11.643: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146448018s
STEP: Saw pod success
Feb 13 15:16:11.643: INFO: Pod "downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57" satisfied condition "success or failure"
Feb 13 15:16:11.646: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57 container client-container: 
STEP: delete the pod
Feb 13 15:16:11.692: INFO: Waiting for pod downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57 to disappear
Feb 13 15:16:11.699: INFO: Pod downwardapi-volume-dd9498a3-715d-4d59-be6e-b99542395b57 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:16:11.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5589" for this suite.
Feb 13 15:16:17.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:16:17.963: INFO: namespace projected-5589 deletion completed in 6.254481913s

• [SLOW TEST:14.594 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:16:17.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 13 15:16:18.072: INFO: Waiting up to 5m0s for pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7" in namespace "downward-api-1367" to be "success or failure"
Feb 13 15:16:18.091: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.031443ms
Feb 13 15:16:20.102: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030020279s
Feb 13 15:16:22.155: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083241317s
Feb 13 15:16:24.164: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091622411s
Feb 13 15:16:26.173: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100727046s
STEP: Saw pod success
Feb 13 15:16:26.173: INFO: Pod "downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7" satisfied condition "success or failure"
Feb 13 15:16:26.179: INFO: Trying to get logs from node iruya-node pod downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7 container dapi-container: 
STEP: delete the pod
Feb 13 15:16:26.249: INFO: Waiting for pod downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7 to disappear
Feb 13 15:16:26.253: INFO: Pod downward-api-7a7c8f22-3ca9-440a-b35d-2f2dd36ef2a7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:16:26.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1367" for this suite.
Feb 13 15:16:32.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:16:32.373: INFO: namespace downward-api-1367 deletion completed in 6.115152355s

• [SLOW TEST:14.409 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
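
For reference: the same resourceFieldRef mechanism also works for environment variables. A sketch covering both limits and requests (values assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # containerName defaults to this container
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory

Without an explicit divisor, CPU values are rounded up to whole cores (CPU_LIMIT=1 here) and memory is reported in bytes.
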
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:16:32.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 13 15:16:32.485: INFO: Waiting up to 5m0s for pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf" in namespace "emptydir-6422" to be "success or failure"
Feb 13 15:16:32.495: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33431ms
Feb 13 15:16:34.509: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02451376s
Feb 13 15:16:36.519: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03400478s
Feb 13 15:16:38.533: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047991365s
Feb 13 15:16:40.544: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059468578s
STEP: Saw pod success
Feb 13 15:16:40.545: INFO: Pod "pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf" satisfied condition "success or failure"
Feb 13 15:16:40.552: INFO: Trying to get logs from node iruya-node pod pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf container test-container: 
STEP: delete the pod
Feb 13 15:16:40.720: INFO: Waiting for pod pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf to disappear
Feb 13 15:16:40.730: INFO: Pod pod-5c81df18-c2fc-46a2-a1ed-cdcd050600cf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:16:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6422" for this suite.
Feb 13 15:16:46.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:16:46.907: INFO: namespace emptydir-6422 deletion completed in 6.170772189s

• [SLOW TEST:14.534 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
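"(root,0666,tmpfs)" reads as: running as root, expecting file mode 0666, on a memory-backed (tmpfs) emptyDir. A hypothetical equivalent of such a fixture in core/v1 Go types follows; busybox stands in for the suite's mount-test image, whose exact image and args the log does not show.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // invented name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.29", // stand-in for the suite's mount-test image
                // Create a file with mode 0666 and read its permissions back.
                Command: []string{"sh", "-c",
                    "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------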
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:16:46.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-0df91011-c3e5-43ed-b97f-56011f98f562
STEP: Creating configMap with name cm-test-opt-upd-4de788f5-08c0-444c-86f9-c34f9660ea7c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0df91011-c3e5-43ed-b97f-56011f98f562
STEP: Updating configmap cm-test-opt-upd-4de788f5-08c0-444c-86f9-c34f9660ea7c
STEP: Creating configMap with name cm-test-opt-create-6374a68f-ff26-4d61-a14c-b30e9fd6a9bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:17:01.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5374" for this suite.
Feb 13 15:17:23.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:17:23.517: INFO: namespace projected-5374 deletion completed in 22.153021564s

• [SLOW TEST:36.609 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
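"Optional updates should be reflected in volume" covers the three ConfigMaps named in the STEPs above: one deleted (cm-test-opt-del-…), one updated (cm-test-opt-upd-…), and one created only after the pod starts (cm-test-opt-create-…). The kubelet can tolerate all three because each projection is marked Optional, and it refreshes the mounted files as the ConfigMaps change. A hypothetical volume fragment in core/v1 Go types (the names are shortened stand-ins for the log's generated ones):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    // One projected volume can merge several sources; marking each ConfigMap
    // projection Optional lets the pod start (and keep running) even if a
    // referenced ConfigMap is deleted, or does not exist yet.
    vol := corev1.Volume{
        Name: "projected-configmap-volume", // invented name
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                        Optional:             boolPtr(true),
                    }},
                    {ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
                        Optional:             boolPtr(true),
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}
------------------------------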
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:17:23.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 15:17:23.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4366'
Feb 13 15:17:23.758: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 13 15:17:23.758: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 13 15:17:25.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4366'
Feb 13 15:17:25.990: INFO: stderr: ""
Feb 13 15:17:25.990: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:17:25.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4366" for this suite.
Feb 13 15:17:32.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:17:32.152: INFO: namespace kubectl-4366 deletion completed in 6.117690067s

• [SLOW TEST:8.635 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
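The stderr captured above records that `kubectl run --generator=deployment/apps.v1` is deprecated in favor of `kubectl run --generator=run-pod/v1` or `kubectl create`. What the deprecated generator produced is an apps/v1 Deployment shaped like the hypothetical reconstruction below, built with the Kubernetes Go API types. The object name, image, and the `run` label convention match the log; the replica count and overall shape are a sketch, not the suite's source.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // kubectl run labels its workload "run=<name>"; the selector must match.
    labels := map[string]string{"run": "e2e-test-nginx-deployment"}
    dep := appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-deployment",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(dep, "", "  ")
    fmt.Println(string(out))
}
------------------------------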
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:17:32.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 13 15:17:32.267: INFO: Waiting up to 5m0s for pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3" in namespace "containers-3336" to be "success or failure"
Feb 13 15:17:32.292: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.002555ms
Feb 13 15:17:34.317: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050251377s
Feb 13 15:17:36.324: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057015662s
Feb 13 15:17:38.332: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065252141s
Feb 13 15:17:40.343: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075994914s
Feb 13 15:17:42.387: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119795627s
STEP: Saw pod success
Feb 13 15:17:42.387: INFO: Pod "client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3" satisfied condition "success or failure"
Feb 13 15:17:42.391: INFO: Trying to get logs from node iruya-node pod client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3 container test-container: 
STEP: delete the pod
Feb 13 15:17:42.419: INFO: Waiting for pod client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3 to disappear
Feb 13 15:17:42.438: INFO: Pod client-containers-73d3aa7e-8cfe-4ea2-8899-f6ac4e10edf3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:17:42.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3336" for this suite.
Feb 13 15:17:48.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:17:48.650: INFO: namespace containers-3336 deletion completed in 6.208738593s

• [SLOW TEST:16.498 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
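The point of the "image defaults" test: when a container spec leaves Command and Args unset, the kubelet runs the image's own ENTRYPOINT and CMD (Command overrides ENTRYPOINT, Args overrides CMD). A minimal hypothetical spec, with an invented pod name and a stand-in image:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"}, // invented name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.29", // stand-in; any image with its own ENTRYPOINT/CMD
                // Command (-> ENTRYPOINT) and Args (-> CMD) are deliberately left
                // unset, so the image's own defaults are what actually run.
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------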
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:17:48.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:17:56.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2801" for this suite.
Feb 13 15:18:48.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:18:49.023: INFO: namespace kubelet-test-2801 deletion completed in 52.128533694s

• [SLOW TEST:60.373 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
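"Should not write to root filesystem" is driven by the container-level ReadOnlyRootFilesystem flag; the real fixture lives in test/e2e/common/kubelet.go as referenced above. A hypothetical sketch of such a pod (names, image, and command are invented):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "kubelet-readonlyfs-demo"}, // invented name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "busybox-readonly",
                Image: "busybox:1.29", // stand-in image
                // With a read-only root filesystem this write must fail;
                // a test of this kind expects exactly that.
                Command: []string{"sh", "-c", "echo x > /file || echo 'write refused'"},
                SecurityContext: &corev1.SecurityContext{
                    ReadOnlyRootFilesystem: boolPtr(true),
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------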
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:18:49.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-fc712c95-fbd7-4de0-8abd-3069f7c2ba06
STEP: Creating a pod to test consume configMaps
Feb 13 15:18:49.149: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5" in namespace "projected-4862" to be "success or failure"
Feb 13 15:18:49.159: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182248ms
Feb 13 15:18:51.170: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019973102s
Feb 13 15:18:53.177: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027219506s
Feb 13 15:18:55.227: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077470315s
Feb 13 15:18:57.235: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085640228s
Feb 13 15:18:59.241: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09166872s
STEP: Saw pod success
Feb 13 15:18:59.241: INFO: Pod "pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5" satisfied condition "success or failure"
Feb 13 15:18:59.246: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 15:18:59.361: INFO: Waiting for pod pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5 to disappear
Feb 13 15:18:59.407: INFO: Pod pod-projected-configmaps-7540c723-1778-4cf7-b52a-2384cd0736f5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:18:59.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4862" for this suite.
Feb 13 15:19:05.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:19:05.627: INFO: namespace projected-4862 deletion completed in 6.2128942s

• [SLOW TEST:16.603 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
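"With mappings as non-root" combines two things: a ConfigMap projection whose Items remap keys to explicit file paths, and a pod that runs under a non-root UID. The sketch below is hypothetical; the ConfigMap name echoes the log's "projected-configmap-test-volume-map-…" prefix without its generated suffix, while the key, path, UID, image, and command are invented.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"}, // invented name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Run as a non-root UID, as the non-root [LinuxOnly] variant requires.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume-map",
                                },
                                // The "mappings": each key lands at an explicit file path.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox:1.29", // stand-in for the suite's mount-test image
                Command: []string{"sh", "-c", "cat /etc/projected/path/to/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "projected-configmap-volume", MountPath: "/etc/projected",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------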
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 13 15:19:05.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 13 15:19:05.782: INFO: Waiting up to 5m0s for pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133" in namespace "emptydir-1730" to be "success or failure"
Feb 13 15:19:05.795: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Pending", Reason="", readiness=false. Elapsed: 13.186019ms
Feb 13 15:19:07.806: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024503972s
Feb 13 15:19:09.823: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040748712s
Feb 13 15:19:11.830: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048032288s
Feb 13 15:19:13.845: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063112358s
Feb 13 15:19:15.861: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079106782s
STEP: Saw pod success
Feb 13 15:19:15.861: INFO: Pod "pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133" satisfied condition "success or failure"
Feb 13 15:19:15.867: INFO: Trying to get logs from node iruya-node pod pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133 container test-container: 
STEP: delete the pod
Feb 13 15:19:16.075: INFO: Waiting for pod pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133 to disappear
Feb 13 15:19:16.082: INFO: Pod pod-1d7a74d9-ee22-4912-a3d7-b0be01f21133 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 15:19:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1730" for this suite.
Feb 13 15:19:22.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 15:19:22.222: INFO: namespace emptydir-1730 deletion completed in 6.133758781s

• [SLOW TEST:16.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
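This "(root,0644,tmpfs)" variant differs from the "(root,0666,tmpfs)" run earlier only in the file mode under test: 0644 keeps the file writable by its owner alone, while 0666 grants write to group and other as well. As a quick decoder for the two modes (plain Go, no Kubernetes dependency):

package main

import (
    "fmt"
    "os"
)

func main() {
    // 0644: owner read/write; group and other read-only.
    // 0666: read/write for owner, group, and other.
    for _, m := range []os.FileMode{0644, 0666} {
        fmt.Printf("%#o -> %s\n", uint32(m), m) // e.g. 0644 -> -rw-r--r--
    }
}
------------------------------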
SSSSSSSSSSSSSSSSSSS
Feb 13 15:19:22.223: INFO: Running AfterSuite actions on all nodes
Feb 13 15:19:22.223: INFO: Running AfterSuite actions on node 1
Feb 13 15:19:22.223: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8591.089 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8591.71s)
FAIL