I1228 12:56:49.991401 8 e2e.go:243] Starting e2e run "fbeb1dac-0546-4439-981b-b1a7fb506aa6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577537808 - Will randomize all specs
Will run 215 of 4412 specs
Dec 28 12:56:50.283: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:56:50.286: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 28 12:56:50.323: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 28 12:56:50.370: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 28 12:56:50.370: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 28 12:56:50.370: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 28 12:56:50.387: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 28 12:56:50.387: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 28 12:56:50.387: INFO: e2e test version: v1.15.7
Dec 28 12:56:50.388: INFO: kube-apiserver version: v1.15.1
SSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:56:50.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Dec 28 12:56:50.538: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-6051e9eb-489f-45cd-9cd3-78b85381bd56 in namespace container-probe-1597
Dec 28 12:56:58.659: INFO: Started pod busybox-6051e9eb-489f-45cd-9cd3-78b85381bd56 in namespace container-probe-1597
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 12:56:58.662: INFO: Initial restart count of pod busybox-6051e9eb-489f-45cd-9cd3-78b85381bd56 is 0
Dec 28 12:57:55.067: INFO: Restart count of pod container-probe-1597/busybox-6051e9eb-489f-45cd-9cd3-78b85381bd56 is now 1 (56.4049305s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:57:55.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1597" for this suite.
Dec 28 12:58:01.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:58:01.250: INFO: namespace container-probe-1597 deletion completed in 6.140051327s
• [SLOW TEST:70.861 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
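For readers reproducing this first spec outside the e2e framework: the busybox pod above can be approximated with a manifest like the one below. The pod name, image tag, timings, and probe thresholds are illustrative assumptions; only the exec `cat /tmp/health` probe and the restart-on-failure behaviour come from the log.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                 # hypothetical; the e2e run generates busybox-<uid> names
spec:
  containers:
  - name: busybox
    image: busybox
    # Create /tmp/health, drop it after a while, then idle. Once the file is
    # gone the probe below fails and the kubelet restarts the container,
    # which is the restartCount 0 -> 1 transition the test waits for.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1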
------------------------------
SSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:58:01.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9161.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 158.223.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.223.158_udp@PTR;check="$$(dig +tcp +noall +answer +search 158.223.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.223.158_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9161.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9161.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 158.223.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.223.158_udp@PTR;check="$$(dig +tcp +noall +answer +search 158.223.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.223.158_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 12:58:15.759: INFO: Unable to read wheezy_udp@dns-test-service.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.773: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.794: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.804: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.813: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9161.svc.cluster.local from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.820: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.828: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.842: INFO: Unable to read 10.102.223.158_udp@PTR from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:15.855: INFO: Unable to read 10.102.223.158_tcp@PTR from pod dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0: the server could not find the requested resource (get pods dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0)
Dec 28 12:58:16.042: INFO: Lookups using dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0 failed for: [wheezy_udp@dns-test-service.dns-9161.svc.cluster.local wheezy_tcp@dns-test-service.dns-9161.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9161.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9161.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9161.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.223.158_udp@PTR 10.102.223.158_tcp@PTR]
Dec 28 12:58:21.152: INFO: DNS probes using dns-9161/dns-test-79cae5ab-4691-43dd-85dc-ebafc1a997c0 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:58:21.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9161" for this suite.
Dec 28 12:58:27.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:58:27.938: INFO: namespace dns-9161 deletion completed in 6.189977571s
• [SLOW TEST:26.688 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:58:27.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 28 12:58:38.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-f460781d-3b09-4f70-8869-3a605785a3ac -c busybox-main-container --namespace=emptydir-1462 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 28 12:58:41.055: INFO: stderr: ""
Dec 28 12:58:41.056: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:58:41.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1462" for this suite.
Dec 28 12:58:48.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:58:48.438: INFO: namespace emptydir-1462 deletion completed in 6.319631499s
• [SLOW TEST:20.500 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
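The shared-volume pod read above pairs two containers on one emptyDir mount. A sketch of that shape, with images and commands assumed; the mount path, file name, and sentinel string are taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume              # the run suffixes this with a generated UID
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                      # one backing directory visible to both containers
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]   # stays up so kubectl exec can read the file
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    # Writes the sentinel that the test reads back via kubectl exec.
    command: ["/bin/sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare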
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:58:48.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 28 12:58:48.512: INFO: Waiting up to 5m0s for pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d" in namespace "containers-2067" to be "success or failure"
Dec 28 12:58:48.519: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.501918ms
Dec 28 12:58:50.546: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03459788s
Dec 28 12:58:52.559: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047889101s
Dec 28 12:58:54.574: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062506662s
Dec 28 12:58:56.655: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143662403s
Dec 28 12:58:58.673: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161736314s
Dec 28 12:59:00.680: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.168716294s
Dec 28 12:59:02.709: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.197455537s
STEP: Saw pod success
Dec 28 12:59:02.709: INFO: Pod "client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d" satisfied condition "success or failure"
Dec 28 12:59:02.723: INFO: Trying to get logs from node iruya-node pod client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d container test-container:
STEP: delete the pod
Dec 28 12:59:02.958: INFO: Waiting for pod client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d to disappear
Dec 28 12:59:02.972: INFO: Pod client-containers-75855a00-e5a0-44e2-bf0b-9e2e832d612d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:59:02.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2067" for this suite.
Dec 28 12:59:09.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:59:09.118: INFO: namespace containers-2067 deletion completed in 6.136342725s
• [SLOW TEST:20.678 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
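The override being exercised is Kubernetes `command:` replacing the image's Docker ENTRYPOINT. A minimal sketch; the pod name and the echoed payload are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override    # illustrative
spec:
  restartPolicy: Never                # lets the pod reach Succeeded, the "success" condition polled above
  containers:
  - name: test-container
    image: busybox
    # `command` overrides the image's ENTRYPOINT; `args` alone would only
    # override CMD, which is a separate conformance case.
    command: ["/bin/echo", "overridden", "entrypoint"]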
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:59:09.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4850/configmap-test-ad39b003-18b4-48a0-9ecd-3b60db088c56
STEP: Creating a pod to test consume configMaps
Dec 28 12:59:09.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3" in namespace "configmap-4850" to be "success or failure"
Dec 28 12:59:09.320: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.816226ms
Dec 28 12:59:11.338: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032356552s
Dec 28 12:59:13.360: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053946034s
Dec 28 12:59:15.371: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064619299s
Dec 28 12:59:17.380: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074180047s
STEP: Saw pod success
Dec 28 12:59:17.380: INFO: Pod "pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3" satisfied condition "success or failure"
Dec 28 12:59:17.386: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3 container env-test:
STEP: delete the pod
Dec 28 12:59:17.464: INFO: Waiting for pod pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3 to disappear
Dec 28 12:59:17.470: INFO: Pod pod-configmaps-b85d8614-5185-40dd-b4a5-4f771b006ea3 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:59:17.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4850" for this suite.
Dec 28 12:59:23.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:59:23.734: INFO: namespace configmap-4850 deletion completed in 6.254696016s
• [SLOW TEST:14.616 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
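The ConfigMap-to-environment-variable wiring tested here looks roughly like this; the key, value, and variable name are assumptions, since the log only shows object names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test                # the run appends a generated UID
data:
  data-1: value-1                     # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "env"] # dumps the environment so the value can be asserted from logs
    env:
    - name: CONFIG_DATA_1             # env var sourced from the ConfigMap key below
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1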
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:59:23.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:59:23.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0" in namespace "projected-1234" to be "success or failure"
Dec 28 12:59:23.937: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.395328ms
Dec 28 12:59:25.941: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020806661s
Dec 28 12:59:27.952: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031434732s
Dec 28 12:59:29.962: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041927476s
Dec 28 12:59:31.972: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051252403s
Dec 28 12:59:33.981: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Running", Reason="", readiness=true. Elapsed: 10.060915884s
Dec 28 12:59:35.998: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.077360926s
STEP: Saw pod success
Dec 28 12:59:35.998: INFO: Pod "downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0" satisfied condition "success or failure"
Dec 28 12:59:36.005: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0 container client-container:
STEP: delete the pod
Dec 28 12:59:36.074: INFO: Waiting for pod downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0 to disappear
Dec 28 12:59:36.079: INFO: Pod downwardapi-volume-a0df5adb-b4b6-47b6-b49c-d186d20e30e0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:59:36.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1234" for this suite.
Dec 28 12:59:42.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:59:42.204: INFO: namespace projected-1234 deletion completed in 6.112215143s
• [SLOW TEST:18.467 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
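DefaultMode on a projected volume sets the permission bits for every file the volume renders. A sketch with an assumed 0400 mode and mount path; the downward API source mirrors the kind of item the test projects:

apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]  # the file mode shows up as -r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400               # applies to all projected files unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name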
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 12:59:42.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:59:42.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff" in namespace "downward-api-2000" to be "success or failure"
Dec 28 12:59:42.391: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 47.480606ms
Dec 28 12:59:44.402: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057966635s
Dec 28 12:59:46.416: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071949601s
Dec 28 12:59:48.426: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082564245s
Dec 28 12:59:50.435: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091107869s
Dec 28 12:59:52.443: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098838642s
Dec 28 12:59:54.451: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.107063834s
STEP: Saw pod success
Dec 28 12:59:54.451: INFO: Pod "downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff" satisfied condition "success or failure"
Dec 28 12:59:54.458: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff container client-container:
STEP: delete the pod
Dec 28 12:59:54.523: INFO: Waiting for pod downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff to disappear
Dec 28 12:59:54.544: INFO: Pod downwardapi-volume-4433ed79-459f-41d5-b9e9-9c60766ef8ff no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 12:59:54.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2000" for this suite.
Dec 28 13:00:00.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:00:00.779: INFO: namespace downward-api-2000 deletion completed in 6.224987182s
• [SLOW TEST:18.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
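This is the companion case to the previous spec: here the mode is set per item rather than volume-wide. A sketch under the same assumptions as above:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-itemmode          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                    # per-item mode, overriding any volume-level default
        fieldRef:
          fieldPath: metadata.name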
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:00:00.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1359
I1228 13:00:00.904662 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1359, replica count: 1
I1228 13:00:01.955774 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1228 13:00:02.956102 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1228 13:00:03.956844 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1228 13:00:04.957306 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady I1228 13:00:05.957926 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 13:00:06.958533 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 13:00:07.959028 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 13:00:08.959484 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1228 13:00:09.959991 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 28 13:00:10.171: INFO: Created: latency-svc-8r9j9 Dec 28 13:00:10.284: INFO: Got endpoints: latency-svc-8r9j9 [223.882818ms] Dec 28 13:00:10.487: INFO: Created: latency-svc-8vgqk Dec 28 13:00:10.499: INFO: Got endpoints: latency-svc-8vgqk [214.253749ms] Dec 28 13:00:10.710: INFO: Created: latency-svc-dlx49 Dec 28 13:00:10.715: INFO: Got endpoints: latency-svc-dlx49 [429.83428ms] Dec 28 13:00:10.789: INFO: Created: latency-svc-glk4d Dec 28 13:00:10.982: INFO: Got endpoints: latency-svc-glk4d [696.390806ms] Dec 28 13:00:11.071: INFO: Created: latency-svc-rkwnl Dec 28 13:00:11.071: INFO: Created: latency-svc-d2f4v Dec 28 13:00:11.170: INFO: Got endpoints: latency-svc-d2f4v [884.512099ms] Dec 28 13:00:11.171: INFO: Got endpoints: latency-svc-rkwnl [886.218929ms] Dec 28 13:00:11.212: INFO: Created: latency-svc-qmf7x Dec 28 13:00:11.235: INFO: Got endpoints: latency-svc-qmf7x [949.280139ms] Dec 28 13:00:11.418: INFO: Created: latency-svc-x52fp Dec 28 13:00:11.424: INFO: Got endpoints: latency-svc-x52fp [1.139014324s] Dec 28 13:00:11.491: INFO: Created: latency-svc-r5klx Dec 28 13:00:11.500: INFO: Got endpoints: latency-svc-r5klx [1.214321819s] Dec 28 13:00:11.689: INFO: Created: latency-svc-g4tld Dec 28 13:00:11.695: INFO: Got endpoints: latency-svc-g4tld [1.409087401s] Dec 28 13:00:11.765: INFO: Created: latency-svc-42984 Dec 28 13:00:11.779: INFO: Got endpoints: latency-svc-42984 [1.493876826s] Dec 28 13:00:11.952: INFO: Created: latency-svc-gfst8 Dec 28 13:00:12.003: INFO: Got endpoints: latency-svc-gfst8 [1.717443s] Dec 28 13:00:12.036: INFO: Created: latency-svc-kg7xt Dec 28 13:00:12.157: INFO: Got endpoints: latency-svc-kg7xt [1.871270022s] Dec 28 13:00:12.193: INFO: Created: latency-svc-g2lv2 Dec 28 13:00:12.418: INFO: Got endpoints: latency-svc-g2lv2 [2.131693615s] Dec 28 13:00:12.508: INFO: Created: latency-svc-nxwpr Dec 28 13:00:12.613: INFO: Got endpoints: latency-svc-nxwpr [2.327365455s] Dec 28 13:00:12.623: INFO: Created: latency-svc-46f56 Dec 28 13:00:12.662: INFO: Got endpoints: latency-svc-46f56 [2.37601805s] Dec 28 13:00:12.776: INFO: Created: latency-svc-fnqjh Dec 28 13:00:12.801: INFO: Got endpoints: latency-svc-fnqjh [2.30193131s] Dec 28 13:00:12.956: INFO: Created: latency-svc-4p6fh Dec 28 13:00:13.143: INFO: Got endpoints: latency-svc-4p6fh [2.428191891s] Dec 28 13:00:13.147: INFO: Created: latency-svc-rq6nt Dec 28 13:00:13.162: INFO: Got endpoints: latency-svc-rq6nt [360.120785ms] Dec 28 13:00:13.221: INFO: Created: latency-svc-r5vv6 Dec 28 13:00:13.223: INFO: Got endpoints: latency-svc-r5vv6 [2.241375383s] Dec 28 13:00:13.336: INFO: Created: latency-svc-m8288 Dec 28 13:00:13.354: INFO: Got endpoints: 
latency-svc-m8288 [2.182925957s] Dec 28 13:00:13.412: INFO: Created: latency-svc-v69xb Dec 28 13:00:13.467: INFO: Got endpoints: latency-svc-v69xb [2.297284167s] Dec 28 13:00:13.493: INFO: Created: latency-svc-crnh8 Dec 28 13:00:13.500: INFO: Got endpoints: latency-svc-crnh8 [2.264258583s] Dec 28 13:00:13.543: INFO: Created: latency-svc-qxc9z Dec 28 13:00:13.553: INFO: Got endpoints: latency-svc-qxc9z [2.128894721s] Dec 28 13:00:13.657: INFO: Created: latency-svc-vt48p Dec 28 13:00:13.669: INFO: Got endpoints: latency-svc-vt48p [2.168629805s] Dec 28 13:00:13.735: INFO: Created: latency-svc-866w2 Dec 28 13:00:13.833: INFO: Got endpoints: latency-svc-866w2 [2.13788231s] Dec 28 13:00:13.863: INFO: Created: latency-svc-f4c89 Dec 28 13:00:13.886: INFO: Got endpoints: latency-svc-f4c89 [2.106245512s] Dec 28 13:00:14.037: INFO: Created: latency-svc-hp766 Dec 28 13:00:14.056: INFO: Got endpoints: latency-svc-hp766 [2.052002707s] Dec 28 13:00:14.084: INFO: Created: latency-svc-wwxzx Dec 28 13:00:14.097: INFO: Got endpoints: latency-svc-wwxzx [1.939482926s] Dec 28 13:00:14.295: INFO: Created: latency-svc-qnwc8 Dec 28 13:00:14.307: INFO: Got endpoints: latency-svc-qnwc8 [1.888975056s] Dec 28 13:00:14.332: INFO: Created: latency-svc-r4492 Dec 28 13:00:14.344: INFO: Got endpoints: latency-svc-r4492 [1.729802371s] Dec 28 13:00:14.373: INFO: Created: latency-svc-d544w Dec 28 13:00:14.380: INFO: Got endpoints: latency-svc-d544w [1.717167303s] Dec 28 13:00:14.501: INFO: Created: latency-svc-lmxmk Dec 28 13:00:14.518: INFO: Got endpoints: latency-svc-lmxmk [1.374682999s] Dec 28 13:00:14.600: INFO: Created: latency-svc-72q62 Dec 28 13:00:14.784: INFO: Got endpoints: latency-svc-72q62 [1.622442235s] Dec 28 13:00:14.816: INFO: Created: latency-svc-svlv8 Dec 28 13:00:14.817: INFO: Got endpoints: latency-svc-svlv8 [1.592996882s] Dec 28 13:00:14.904: INFO: Created: latency-svc-6q2qh Dec 28 13:00:14.977: INFO: Got endpoints: latency-svc-6q2qh [1.622345337s] Dec 28 13:00:15.016: INFO: Created: latency-svc-nmgxs Dec 28 13:00:15.035: INFO: Got endpoints: latency-svc-nmgxs [1.567660579s] Dec 28 13:00:15.197: INFO: Created: latency-svc-w2wz6 Dec 28 13:00:15.201: INFO: Got endpoints: latency-svc-w2wz6 [1.701074832s] Dec 28 13:00:15.258: INFO: Created: latency-svc-fxxz5 Dec 28 13:00:15.260: INFO: Got endpoints: latency-svc-fxxz5 [1.706559004s] Dec 28 13:00:15.422: INFO: Created: latency-svc-jscx4 Dec 28 13:00:15.469: INFO: Created: latency-svc-qqn6x Dec 28 13:00:15.471: INFO: Got endpoints: latency-svc-jscx4 [1.801920346s] Dec 28 13:00:15.486: INFO: Got endpoints: latency-svc-qqn6x [1.652650273s] Dec 28 13:00:15.604: INFO: Created: latency-svc-d6qh6 Dec 28 13:00:15.612: INFO: Got endpoints: latency-svc-d6qh6 [1.726238353s] Dec 28 13:00:15.653: INFO: Created: latency-svc-8hpbk Dec 28 13:00:15.656: INFO: Got endpoints: latency-svc-8hpbk [1.60071892s] Dec 28 13:00:15.815: INFO: Created: latency-svc-g7k9b Dec 28 13:00:15.846: INFO: Got endpoints: latency-svc-g7k9b [1.749142334s] Dec 28 13:00:15.930: INFO: Created: latency-svc-t789c Dec 28 13:00:15.983: INFO: Got endpoints: latency-svc-t789c [1.676194215s] Dec 28 13:00:16.019: INFO: Created: latency-svc-wq2lh Dec 28 13:00:16.040: INFO: Got endpoints: latency-svc-wq2lh [1.696405113s] Dec 28 13:00:16.161: INFO: Created: latency-svc-sxgp6 Dec 28 13:00:16.184: INFO: Got endpoints: latency-svc-sxgp6 [1.80473438s] Dec 28 13:00:16.255: INFO: Created: latency-svc-h7x85 Dec 28 13:00:16.357: INFO: Got endpoints: latency-svc-h7x85 [1.838671067s] Dec 28 13:00:16.389: INFO: Created: 
latency-svc-495vf Dec 28 13:00:16.408: INFO: Got endpoints: latency-svc-495vf [1.622996277s] Dec 28 13:00:16.573: INFO: Created: latency-svc-zmmdw Dec 28 13:00:16.637: INFO: Got endpoints: latency-svc-zmmdw [1.820699222s] Dec 28 13:00:16.660: INFO: Created: latency-svc-g67z8 Dec 28 13:00:16.813: INFO: Got endpoints: latency-svc-g67z8 [1.836017347s] Dec 28 13:00:16.875: INFO: Created: latency-svc-jrkk2 Dec 28 13:00:16.893: INFO: Got endpoints: latency-svc-jrkk2 [1.857423645s] Dec 28 13:00:17.059: INFO: Created: latency-svc-gh2pf Dec 28 13:00:17.067: INFO: Got endpoints: latency-svc-gh2pf [1.866004095s] Dec 28 13:00:17.127: INFO: Created: latency-svc-257vt Dec 28 13:00:17.327: INFO: Got endpoints: latency-svc-257vt [2.067373763s] Dec 28 13:00:17.332: INFO: Created: latency-svc-w7lr5 Dec 28 13:00:17.362: INFO: Got endpoints: latency-svc-w7lr5 [1.890451217s] Dec 28 13:00:17.390: INFO: Created: latency-svc-fkvgb Dec 28 13:00:17.398: INFO: Got endpoints: latency-svc-fkvgb [1.912256633s] Dec 28 13:00:17.588: INFO: Created: latency-svc-8s49c Dec 28 13:00:17.605: INFO: Got endpoints: latency-svc-8s49c [1.99267119s] Dec 28 13:00:17.643: INFO: Created: latency-svc-thwjx Dec 28 13:00:17.654: INFO: Got endpoints: latency-svc-thwjx [1.997674842s] Dec 28 13:00:17.811: INFO: Created: latency-svc-qpftb Dec 28 13:00:17.829: INFO: Got endpoints: latency-svc-qpftb [1.982726584s] Dec 28 13:00:17.892: INFO: Created: latency-svc-w445q Dec 28 13:00:17.904: INFO: Got endpoints: latency-svc-w445q [1.920811865s] Dec 28 13:00:18.043: INFO: Created: latency-svc-r2ntn Dec 28 13:00:18.047: INFO: Got endpoints: latency-svc-r2ntn [2.006815157s] Dec 28 13:00:18.225: INFO: Created: latency-svc-lztp8 Dec 28 13:00:18.238: INFO: Got endpoints: latency-svc-lztp8 [2.053104301s] Dec 28 13:00:18.310: INFO: Created: latency-svc-pzmg6 Dec 28 13:00:18.518: INFO: Got endpoints: latency-svc-pzmg6 [2.160038452s] Dec 28 13:00:18.542: INFO: Created: latency-svc-sskwp Dec 28 13:00:18.561: INFO: Got endpoints: latency-svc-sskwp [2.153352209s] Dec 28 13:00:18.585: INFO: Created: latency-svc-dwsql Dec 28 13:00:18.604: INFO: Got endpoints: latency-svc-dwsql [1.966492031s] Dec 28 13:00:18.767: INFO: Created: latency-svc-s6wvr Dec 28 13:00:18.814: INFO: Got endpoints: latency-svc-s6wvr [2.000476583s] Dec 28 13:00:18.846: INFO: Created: latency-svc-8l7qj Dec 28 13:00:18.956: INFO: Got endpoints: latency-svc-8l7qj [2.062513165s] Dec 28 13:00:19.011: INFO: Created: latency-svc-lllh8 Dec 28 13:00:19.022: INFO: Got endpoints: latency-svc-lllh8 [1.954259771s] Dec 28 13:00:19.232: INFO: Created: latency-svc-gg5rj Dec 28 13:00:19.237: INFO: Got endpoints: latency-svc-gg5rj [1.909794992s] Dec 28 13:00:19.294: INFO: Created: latency-svc-w5wsb Dec 28 13:00:19.300: INFO: Got endpoints: latency-svc-w5wsb [1.938479858s] Dec 28 13:00:19.474: INFO: Created: latency-svc-rnr4j Dec 28 13:00:19.485: INFO: Got endpoints: latency-svc-rnr4j [2.086273471s] Dec 28 13:00:19.543: INFO: Created: latency-svc-8qr59 Dec 28 13:00:19.719: INFO: Got endpoints: latency-svc-8qr59 [2.113091218s] Dec 28 13:00:19.723: INFO: Created: latency-svc-7548v Dec 28 13:00:19.743: INFO: Got endpoints: latency-svc-7548v [2.088238026s] Dec 28 13:00:20.090: INFO: Created: latency-svc-dhtks Dec 28 13:00:20.146: INFO: Got endpoints: latency-svc-dhtks [2.316141352s] Dec 28 13:00:20.172: INFO: Created: latency-svc-6n954 Dec 28 13:00:20.257: INFO: Got endpoints: latency-svc-6n954 [2.352837938s] Dec 28 13:00:20.442: INFO: Created: latency-svc-xppwq Dec 28 13:00:20.459: INFO: Got endpoints: 
latency-svc-xppwq [2.411140851s] Dec 28 13:00:20.634: INFO: Created: latency-svc-2bhjd Dec 28 13:00:20.679: INFO: Got endpoints: latency-svc-2bhjd [2.440766536s] Dec 28 13:00:20.681: INFO: Created: latency-svc-zsrxc Dec 28 13:00:20.696: INFO: Got endpoints: latency-svc-zsrxc [2.178266692s] Dec 28 13:00:20.811: INFO: Created: latency-svc-4g5ft Dec 28 13:00:20.823: INFO: Got endpoints: latency-svc-4g5ft [2.261358996s] Dec 28 13:00:20.870: INFO: Created: latency-svc-f2gnp Dec 28 13:00:20.876: INFO: Got endpoints: latency-svc-f2gnp [2.271335215s] Dec 28 13:00:20.912: INFO: Created: latency-svc-5csq6 Dec 28 13:00:20.987: INFO: Got endpoints: latency-svc-5csq6 [2.173055609s] Dec 28 13:00:21.015: INFO: Created: latency-svc-6f6g9 Dec 28 13:00:21.045: INFO: Got endpoints: latency-svc-6f6g9 [2.089035419s] Dec 28 13:00:21.097: INFO: Created: latency-svc-mz6sq Dec 28 13:00:21.166: INFO: Created: latency-svc-zxq5k Dec 28 13:00:21.174: INFO: Got endpoints: latency-svc-zxq5k [1.937056519s] Dec 28 13:00:21.176: INFO: Got endpoints: latency-svc-mz6sq [2.15350859s] Dec 28 13:00:21.233: INFO: Created: latency-svc-v5kb7 Dec 28 13:00:21.324: INFO: Got endpoints: latency-svc-v5kb7 [2.023450859s] Dec 28 13:00:21.349: INFO: Created: latency-svc-p6qdp Dec 28 13:00:21.358: INFO: Got endpoints: latency-svc-p6qdp [1.872824619s] Dec 28 13:00:21.412: INFO: Created: latency-svc-vb9qw Dec 28 13:00:21.416: INFO: Got endpoints: latency-svc-vb9qw [1.697301411s] Dec 28 13:00:21.527: INFO: Created: latency-svc-q9fx9 Dec 28 13:00:21.541: INFO: Got endpoints: latency-svc-q9fx9 [1.798010465s] Dec 28 13:00:21.587: INFO: Created: latency-svc-sh9rd Dec 28 13:00:21.596: INFO: Got endpoints: latency-svc-sh9rd [1.449725955s] Dec 28 13:00:21.741: INFO: Created: latency-svc-db2v8 Dec 28 13:00:21.753: INFO: Got endpoints: latency-svc-db2v8 [1.495406758s] Dec 28 13:00:21.770: INFO: Created: latency-svc-9qzvj Dec 28 13:00:21.789: INFO: Got endpoints: latency-svc-9qzvj [1.330436236s] Dec 28 13:00:21.915: INFO: Created: latency-svc-qvd6p Dec 28 13:00:21.922: INFO: Got endpoints: latency-svc-qvd6p [1.243044269s] Dec 28 13:00:21.972: INFO: Created: latency-svc-gbp6b Dec 28 13:00:22.005: INFO: Got endpoints: latency-svc-gbp6b [1.308488271s] Dec 28 13:00:22.132: INFO: Created: latency-svc-fq7q7 Dec 28 13:00:22.168: INFO: Got endpoints: latency-svc-fq7q7 [1.345513938s] Dec 28 13:00:22.311: INFO: Created: latency-svc-j9xjf Dec 28 13:00:22.316: INFO: Got endpoints: latency-svc-j9xjf [1.44001823s] Dec 28 13:00:22.404: INFO: Created: latency-svc-xfb94 Dec 28 13:00:22.532: INFO: Got endpoints: latency-svc-xfb94 [1.544287058s] Dec 28 13:00:22.556: INFO: Created: latency-svc-7wfs2 Dec 28 13:00:22.564: INFO: Got endpoints: latency-svc-7wfs2 [1.51891427s] Dec 28 13:00:22.789: INFO: Created: latency-svc-5lcmc Dec 28 13:00:22.793: INFO: Got endpoints: latency-svc-5lcmc [1.617403046s] Dec 28 13:00:22.844: INFO: Created: latency-svc-d5ngt Dec 28 13:00:22.977: INFO: Got endpoints: latency-svc-d5ngt [1.80245337s] Dec 28 13:00:23.025: INFO: Created: latency-svc-7qnmg Dec 28 13:00:23.029: INFO: Got endpoints: latency-svc-7qnmg [1.704147694s] Dec 28 13:00:23.201: INFO: Created: latency-svc-2t2b5 Dec 28 13:00:23.244: INFO: Got endpoints: latency-svc-2t2b5 [1.886581498s] Dec 28 13:00:23.251: INFO: Created: latency-svc-vbnsj Dec 28 13:00:23.262: INFO: Got endpoints: latency-svc-vbnsj [1.845407503s] Dec 28 13:00:23.284: INFO: Created: latency-svc-4pb24 Dec 28 13:00:23.477: INFO: Got endpoints: latency-svc-4pb24 [1.936386026s] Dec 28 13:00:23.500: INFO: Created: 
latency-svc-jrxxn Dec 28 13:00:23.531: INFO: Got endpoints: latency-svc-jrxxn [1.934785889s] Dec 28 13:00:23.539: INFO: Created: latency-svc-zg5h8 Dec 28 13:00:23.556: INFO: Got endpoints: latency-svc-zg5h8 [1.802861338s] Dec 28 13:00:23.727: INFO: Created: latency-svc-wvts6 Dec 28 13:00:23.729: INFO: Got endpoints: latency-svc-wvts6 [1.939413508s] Dec 28 13:00:23.778: INFO: Created: latency-svc-vwfmh Dec 28 13:00:23.782: INFO: Got endpoints: latency-svc-vwfmh [1.859116533s] Dec 28 13:00:23.971: INFO: Created: latency-svc-nbntn Dec 28 13:00:23.973: INFO: Got endpoints: latency-svc-nbntn [1.967742724s] Dec 28 13:00:24.026: INFO: Created: latency-svc-zl4bw Dec 28 13:00:24.040: INFO: Got endpoints: latency-svc-zl4bw [1.871365554s] Dec 28 13:00:24.190: INFO: Created: latency-svc-dmrqc Dec 28 13:00:24.202: INFO: Got endpoints: latency-svc-dmrqc [1.885918768s] Dec 28 13:00:24.257: INFO: Created: latency-svc-zlldx Dec 28 13:00:24.264: INFO: Got endpoints: latency-svc-zlldx [1.731715141s] Dec 28 13:00:24.449: INFO: Created: latency-svc-hhjdk Dec 28 13:00:24.487: INFO: Created: latency-svc-88295 Dec 28 13:00:24.487: INFO: Got endpoints: latency-svc-hhjdk [1.922031197s] Dec 28 13:00:24.495: INFO: Got endpoints: latency-svc-88295 [1.702271449s] Dec 28 13:00:24.534: INFO: Created: latency-svc-56lhw Dec 28 13:00:24.727: INFO: Got endpoints: latency-svc-56lhw [1.749487304s] Dec 28 13:00:24.739: INFO: Created: latency-svc-lrtxb Dec 28 13:00:24.783: INFO: Got endpoints: latency-svc-lrtxb [1.75455949s] Dec 28 13:00:25.633: INFO: Created: latency-svc-2h4cm Dec 28 13:00:25.646: INFO: Got endpoints: latency-svc-2h4cm [2.401519023s] Dec 28 13:00:25.696: INFO: Created: latency-svc-z6265 Dec 28 13:00:25.713: INFO: Got endpoints: latency-svc-z6265 [2.451290555s] Dec 28 13:00:25.912: INFO: Created: latency-svc-lp9r2 Dec 28 13:00:25.921: INFO: Got endpoints: latency-svc-lp9r2 [2.443788368s] Dec 28 13:00:25.967: INFO: Created: latency-svc-8b9mw Dec 28 13:00:26.164: INFO: Got endpoints: latency-svc-8b9mw [2.632769177s] Dec 28 13:00:26.181: INFO: Created: latency-svc-75z9l Dec 28 13:00:26.228: INFO: Got endpoints: latency-svc-75z9l [2.671914197s] Dec 28 13:00:26.243: INFO: Created: latency-svc-plb58 Dec 28 13:00:26.546: INFO: Got endpoints: latency-svc-plb58 [2.816282189s] Dec 28 13:00:26.570: INFO: Created: latency-svc-hnzk9 Dec 28 13:00:26.588: INFO: Got endpoints: latency-svc-hnzk9 [2.805584165s] Dec 28 13:00:26.651: INFO: Created: latency-svc-kfn87 Dec 28 13:00:26.791: INFO: Got endpoints: latency-svc-kfn87 [2.817904902s] Dec 28 13:00:26.825: INFO: Created: latency-svc-m6jtm Dec 28 13:00:26.896: INFO: Got endpoints: latency-svc-m6jtm [2.85562162s] Dec 28 13:00:26.913: INFO: Created: latency-svc-dgxz9 Dec 28 13:00:26.997: INFO: Got endpoints: latency-svc-dgxz9 [2.794566521s] Dec 28 13:00:27.059: INFO: Created: latency-svc-cvxtj Dec 28 13:00:27.095: INFO: Got endpoints: latency-svc-cvxtj [2.831139745s] Dec 28 13:00:27.189: INFO: Created: latency-svc-lm2dx Dec 28 13:00:27.198: INFO: Got endpoints: latency-svc-lm2dx [2.711043642s] Dec 28 13:00:27.236: INFO: Created: latency-svc-qrqqp Dec 28 13:00:27.268: INFO: Created: latency-svc-96zkp Dec 28 13:00:27.268: INFO: Got endpoints: latency-svc-qrqqp [2.772090204s] Dec 28 13:00:27.272: INFO: Got endpoints: latency-svc-96zkp [2.545481903s] Dec 28 13:00:27.417: INFO: Created: latency-svc-zrcgh Dec 28 13:00:27.438: INFO: Got endpoints: latency-svc-zrcgh [2.654043414s] Dec 28 13:00:27.626: INFO: Created: latency-svc-d8tqg Dec 28 13:00:27.629: INFO: Got endpoints: 
latency-svc-d8tqg [1.982310243s] Dec 28 13:00:27.696: INFO: Created: latency-svc-n5jlp Dec 28 13:00:27.707: INFO: Got endpoints: latency-svc-n5jlp [1.993417732s] Dec 28 13:00:27.869: INFO: Created: latency-svc-nbrfq Dec 28 13:00:27.896: INFO: Got endpoints: latency-svc-nbrfq [1.974023888s] Dec 28 13:00:27.959: INFO: Created: latency-svc-7f4kh Dec 28 13:00:27.971: INFO: Got endpoints: latency-svc-7f4kh [1.806142682s] Dec 28 13:00:28.120: INFO: Created: latency-svc-dc6c4 Dec 28 13:00:28.128: INFO: Got endpoints: latency-svc-dc6c4 [1.899378599s] Dec 28 13:00:28.176: INFO: Created: latency-svc-cd8m9 Dec 28 13:00:28.176: INFO: Got endpoints: latency-svc-cd8m9 [1.629412679s] Dec 28 13:00:28.309: INFO: Created: latency-svc-444q5 Dec 28 13:00:28.327: INFO: Got endpoints: latency-svc-444q5 [1.738789987s] Dec 28 13:00:28.576: INFO: Created: latency-svc-gnjqs Dec 28 13:00:28.590: INFO: Got endpoints: latency-svc-gnjqs [1.798551313s] Dec 28 13:00:28.656: INFO: Created: latency-svc-qdh48 Dec 28 13:00:28.656: INFO: Got endpoints: latency-svc-qdh48 [1.759570469s] Dec 28 13:00:28.790: INFO: Created: latency-svc-q8865 Dec 28 13:00:28.832: INFO: Got endpoints: latency-svc-q8865 [1.834428426s] Dec 28 13:00:28.839: INFO: Created: latency-svc-nrc97 Dec 28 13:00:28.849: INFO: Got endpoints: latency-svc-nrc97 [1.753417521s] Dec 28 13:00:29.040: INFO: Created: latency-svc-swqhn Dec 28 13:00:29.046: INFO: Got endpoints: latency-svc-swqhn [1.847355948s] Dec 28 13:00:29.087: INFO: Created: latency-svc-g6znq Dec 28 13:00:29.333: INFO: Created: latency-svc-mthmg Dec 28 13:00:29.334: INFO: Got endpoints: latency-svc-g6znq [2.065905382s] Dec 28 13:00:29.347: INFO: Got endpoints: latency-svc-mthmg [2.07496285s] Dec 28 13:00:29.721: INFO: Created: latency-svc-r2fsr Dec 28 13:00:29.731: INFO: Got endpoints: latency-svc-r2fsr [2.293582944s] Dec 28 13:00:29.809: INFO: Created: latency-svc-j5sc8 Dec 28 13:00:29.816: INFO: Got endpoints: latency-svc-j5sc8 [2.187014058s] Dec 28 13:00:29.978: INFO: Created: latency-svc-xvgds Dec 28 13:00:29.993: INFO: Got endpoints: latency-svc-xvgds [2.285604881s] Dec 28 13:00:30.149: INFO: Created: latency-svc-xhvlr Dec 28 13:00:30.215: INFO: Created: latency-svc-78flw Dec 28 13:00:30.215: INFO: Got endpoints: latency-svc-xhvlr [2.319368956s] Dec 28 13:00:30.231: INFO: Got endpoints: latency-svc-78flw [2.260446728s] Dec 28 13:00:30.381: INFO: Created: latency-svc-sl2cn Dec 28 13:00:30.391: INFO: Got endpoints: latency-svc-sl2cn [2.262681459s] Dec 28 13:00:30.468: INFO: Created: latency-svc-78w2z Dec 28 13:00:30.599: INFO: Got endpoints: latency-svc-78w2z [2.423486341s] Dec 28 13:00:30.649: INFO: Created: latency-svc-8gjfg Dec 28 13:00:30.690: INFO: Got endpoints: latency-svc-8gjfg [2.362561049s] Dec 28 13:00:30.783: INFO: Created: latency-svc-l2gsx Dec 28 13:00:31.079: INFO: Got endpoints: latency-svc-l2gsx [2.489098274s] Dec 28 13:00:31.089: INFO: Created: latency-svc-2zfl2 Dec 28 13:00:31.090: INFO: Created: latency-svc-hz6dx Dec 28 13:00:31.173: INFO: Got endpoints: latency-svc-hz6dx [2.516929046s] Dec 28 13:00:31.173: INFO: Got endpoints: latency-svc-2zfl2 [2.341030054s] Dec 28 13:00:31.181: INFO: Created: latency-svc-j6xrl Dec 28 13:00:31.325: INFO: Got endpoints: latency-svc-j6xrl [2.475503427s] Dec 28 13:00:31.381: INFO: Created: latency-svc-gg6tq Dec 28 13:00:31.391: INFO: Got endpoints: latency-svc-gg6tq [2.345147657s] Dec 28 13:00:31.561: INFO: Created: latency-svc-c6lpk Dec 28 13:00:31.577: INFO: Got endpoints: latency-svc-c6lpk [2.243060224s] Dec 28 13:00:31.631: INFO: Created: 
latency-svc-p6slk Dec 28 13:00:31.746: INFO: Got endpoints: latency-svc-p6slk [2.398700937s] Dec 28 13:00:31.929: INFO: Created: latency-svc-fhdqt Dec 28 13:00:31.937: INFO: Got endpoints: latency-svc-fhdqt [2.205511349s] Dec 28 13:00:31.979: INFO: Created: latency-svc-wqkfj Dec 28 13:00:31.992: INFO: Got endpoints: latency-svc-wqkfj [2.17532183s] Dec 28 13:00:32.069: INFO: Created: latency-svc-dvg5s Dec 28 13:00:32.082: INFO: Got endpoints: latency-svc-dvg5s [2.088854639s] Dec 28 13:00:32.120: INFO: Created: latency-svc-4wj8b Dec 28 13:00:32.130: INFO: Got endpoints: latency-svc-4wj8b [1.914754309s] Dec 28 13:00:32.161: INFO: Created: latency-svc-tpr4b Dec 28 13:00:32.254: INFO: Got endpoints: latency-svc-tpr4b [2.022550494s] Dec 28 13:00:32.354: INFO: Created: latency-svc-qlts9 Dec 28 13:00:32.431: INFO: Got endpoints: latency-svc-qlts9 [2.040232164s] Dec 28 13:00:32.477: INFO: Created: latency-svc-b28w8 Dec 28 13:00:32.493: INFO: Got endpoints: latency-svc-b28w8 [1.893823672s] Dec 28 13:00:32.639: INFO: Created: latency-svc-64wwt Dec 28 13:00:32.675: INFO: Got endpoints: latency-svc-64wwt [1.984497934s] Dec 28 13:00:32.678: INFO: Created: latency-svc-rgv5n Dec 28 13:00:32.688: INFO: Got endpoints: latency-svc-rgv5n [1.608473887s] Dec 28 13:00:32.836: INFO: Created: latency-svc-7vzrf Dec 28 13:00:32.847: INFO: Got endpoints: latency-svc-7vzrf [1.674190673s] Dec 28 13:00:32.923: INFO: Created: latency-svc-9crxw Dec 28 13:00:32.995: INFO: Got endpoints: latency-svc-9crxw [1.821834281s] Dec 28 13:00:33.041: INFO: Created: latency-svc-rb6b4 Dec 28 13:00:33.042: INFO: Got endpoints: latency-svc-rb6b4 [1.716557559s] Dec 28 13:00:33.075: INFO: Created: latency-svc-qc2wb Dec 28 13:00:33.172: INFO: Got endpoints: latency-svc-qc2wb [1.780778058s] Dec 28 13:00:33.205: INFO: Created: latency-svc-vf5pc Dec 28 13:00:33.231: INFO: Got endpoints: latency-svc-vf5pc [1.653730127s] Dec 28 13:00:33.278: INFO: Created: latency-svc-qcnsw Dec 28 13:00:33.337: INFO: Got endpoints: latency-svc-qcnsw [1.59028936s] Dec 28 13:00:33.361: INFO: Created: latency-svc-wqqfm Dec 28 13:00:33.361: INFO: Got endpoints: latency-svc-wqqfm [1.423619469s] Dec 28 13:00:33.395: INFO: Created: latency-svc-wcdp8 Dec 28 13:00:33.433: INFO: Got endpoints: latency-svc-wcdp8 [1.441363605s] Dec 28 13:00:33.441: INFO: Created: latency-svc-zkptw Dec 28 13:00:33.600: INFO: Got endpoints: latency-svc-zkptw [1.517766618s] Dec 28 13:00:33.632: INFO: Created: latency-svc-l7w8n Dec 28 13:00:33.654: INFO: Got endpoints: latency-svc-l7w8n [1.52324095s] Dec 28 13:00:33.675: INFO: Created: latency-svc-6stjn Dec 28 13:00:33.826: INFO: Got endpoints: latency-svc-6stjn [1.57176095s] Dec 28 13:00:33.832: INFO: Created: latency-svc-bj4wz Dec 28 13:00:33.858: INFO: Got endpoints: latency-svc-bj4wz [1.426548386s] Dec 28 13:00:33.903: INFO: Created: latency-svc-k7lmf Dec 28 13:00:33.916: INFO: Got endpoints: latency-svc-k7lmf [1.421960776s] Dec 28 13:00:34.024: INFO: Created: latency-svc-q6qj2 Dec 28 13:00:34.066: INFO: Got endpoints: latency-svc-q6qj2 [1.390588325s] Dec 28 13:00:34.080: INFO: Created: latency-svc-pmhq5 Dec 28 13:00:34.093: INFO: Got endpoints: latency-svc-pmhq5 [1.404775928s] Dec 28 13:00:34.205: INFO: Created: latency-svc-ctfmn Dec 28 13:00:34.221: INFO: Got endpoints: latency-svc-ctfmn [1.373200052s] Dec 28 13:00:34.263: INFO: Created: latency-svc-t56rs Dec 28 13:00:34.298: INFO: Got endpoints: latency-svc-t56rs [1.302706631s] Dec 28 13:00:34.439: INFO: Created: latency-svc-2wswt Dec 28 13:00:34.444: INFO: Got endpoints: 
latency-svc-2wswt [1.402481521s] Dec 28 13:00:34.502: INFO: Created: latency-svc-96qjw Dec 28 13:00:34.691: INFO: Got endpoints: latency-svc-96qjw [1.518622546s] Dec 28 13:00:34.693: INFO: Created: latency-svc-chk85 Dec 28 13:00:34.735: INFO: Got endpoints: latency-svc-chk85 [1.503606622s] Dec 28 13:00:34.756: INFO: Created: latency-svc-grhrn Dec 28 13:00:34.756: INFO: Got endpoints: latency-svc-grhrn [1.419360073s] Dec 28 13:00:34.882: INFO: Created: latency-svc-nkt6x Dec 28 13:00:34.909: INFO: Got endpoints: latency-svc-nkt6x [1.548151787s] Dec 28 13:00:34.938: INFO: Created: latency-svc-9zfqr Dec 28 13:00:34.956: INFO: Got endpoints: latency-svc-9zfqr [1.522398825s] Dec 28 13:00:35.085: INFO: Created: latency-svc-2hl4p Dec 28 13:00:35.094: INFO: Got endpoints: latency-svc-2hl4p [1.493705795s] Dec 28 13:00:35.181: INFO: Created: latency-svc-l8kdn Dec 28 13:00:35.307: INFO: Got endpoints: latency-svc-l8kdn [1.653269719s] Dec 28 13:00:35.325: INFO: Created: latency-svc-7lb2c Dec 28 13:00:35.337: INFO: Got endpoints: latency-svc-7lb2c [1.510206622s] Dec 28 13:00:35.370: INFO: Created: latency-svc-m29jq Dec 28 13:00:35.376: INFO: Got endpoints: latency-svc-m29jq [1.516951168s] Dec 28 13:00:35.606: INFO: Created: latency-svc-mjnxn Dec 28 13:00:35.616: INFO: Got endpoints: latency-svc-mjnxn [1.700427434s] Dec 28 13:00:35.677: INFO: Created: latency-svc-26tcn Dec 28 13:00:35.689: INFO: Got endpoints: latency-svc-26tcn [1.623363985s] Dec 28 13:00:35.839: INFO: Created: latency-svc-gcd5t Dec 28 13:00:35.862: INFO: Got endpoints: latency-svc-gcd5t [1.767933143s] Dec 28 13:00:35.928: INFO: Created: latency-svc-svdmh Dec 28 13:00:36.018: INFO: Got endpoints: latency-svc-svdmh [1.797073235s] Dec 28 13:00:36.056: INFO: Created: latency-svc-r4vg9 Dec 28 13:00:36.057: INFO: Got endpoints: latency-svc-r4vg9 [1.758898011s] Dec 28 13:00:36.235: INFO: Created: latency-svc-gdbs7 Dec 28 13:00:36.236: INFO: Got endpoints: latency-svc-gdbs7 [1.792130203s] Dec 28 13:00:36.237: INFO: Latencies: [214.253749ms 360.120785ms 429.83428ms 696.390806ms 884.512099ms 886.218929ms 949.280139ms 1.139014324s 1.214321819s 1.243044269s 1.302706631s 1.308488271s 1.330436236s 1.345513938s 1.373200052s 1.374682999s 1.390588325s 1.402481521s 1.404775928s 1.409087401s 1.419360073s 1.421960776s 1.423619469s 1.426548386s 1.44001823s 1.441363605s 1.449725955s 1.493705795s 1.493876826s 1.495406758s 1.503606622s 1.510206622s 1.516951168s 1.517766618s 1.518622546s 1.51891427s 1.522398825s 1.52324095s 1.544287058s 1.548151787s 1.567660579s 1.57176095s 1.59028936s 1.592996882s 1.60071892s 1.608473887s 1.617403046s 1.622345337s 1.622442235s 1.622996277s 1.623363985s 1.629412679s 1.652650273s 1.653269719s 1.653730127s 1.674190673s 1.676194215s 1.696405113s 1.697301411s 1.700427434s 1.701074832s 1.702271449s 1.704147694s 1.706559004s 1.716557559s 1.717167303s 1.717443s 1.726238353s 1.729802371s 1.731715141s 1.738789987s 1.749142334s 1.749487304s 1.753417521s 1.75455949s 1.758898011s 1.759570469s 1.767933143s 1.780778058s 1.792130203s 1.797073235s 1.798010465s 1.798551313s 1.801920346s 1.80245337s 1.802861338s 1.80473438s 1.806142682s 1.820699222s 1.821834281s 1.834428426s 1.836017347s 1.838671067s 1.845407503s 1.847355948s 1.857423645s 1.859116533s 1.866004095s 1.871270022s 1.871365554s 1.872824619s 1.885918768s 1.886581498s 1.888975056s 1.890451217s 1.893823672s 1.899378599s 1.909794992s 1.912256633s 1.914754309s 1.920811865s 1.922031197s 1.934785889s 1.936386026s 1.937056519s 1.938479858s 1.939413508s 1.939482926s 1.954259771s 1.966492031s 
1.967742724s 1.974023888s 1.982310243s 1.982726584s 1.984497934s 1.99267119s 1.993417732s 1.997674842s 2.000476583s 2.006815157s 2.022550494s 2.023450859s 2.040232164s 2.052002707s 2.053104301s 2.062513165s 2.065905382s 2.067373763s 2.07496285s 2.086273471s 2.088238026s 2.088854639s 2.089035419s 2.106245512s 2.113091218s 2.128894721s 2.131693615s 2.13788231s 2.153352209s 2.15350859s 2.160038452s 2.168629805s 2.173055609s 2.17532183s 2.178266692s 2.182925957s 2.187014058s 2.205511349s 2.241375383s 2.243060224s 2.260446728s 2.261358996s 2.262681459s 2.264258583s 2.271335215s 2.285604881s 2.293582944s 2.297284167s 2.30193131s 2.316141352s 2.319368956s 2.327365455s 2.341030054s 2.345147657s 2.352837938s 2.362561049s 2.37601805s 2.398700937s 2.401519023s 2.411140851s 2.423486341s 2.428191891s 2.440766536s 2.443788368s 2.451290555s 2.475503427s 2.489098274s 2.516929046s 2.545481903s 2.632769177s 2.654043414s 2.671914197s 2.711043642s 2.772090204s 2.794566521s 2.805584165s 2.816282189s 2.817904902s 2.831139745s 2.85562162s] Dec 28 13:00:36.238: INFO: 50 %ile: 1.872824619s Dec 28 13:00:36.238: INFO: 90 %ile: 2.423486341s Dec 28 13:00:36.238: INFO: 99 %ile: 2.831139745s Dec 28 13:00:36.238: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:00:36.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1359" for this suite. Dec 28 13:01:20.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:01:20.594: INFO: namespace svc-latency-1359 deletion completed in 44.299282319s • [SLOW TEST:79.815 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:01:20.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 28 13:01:31.420: INFO: Successfully updated pod "labelsupdatec0fd9926-14f8-4402-a26d-b0797906ca8b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:01:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3409" for this suite. 
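The labels-update test above works by mounting pod metadata through a downwardAPI volume, which the kubelet rewrites when the pod's labels change. A minimal sketch of that setup — the pod name, label, and image here are illustrative, not the ones the suite generated:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-update-demo        # illustrative name
    labels:
      mylabel: value-1
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  # Changing a label should eventually be reflected in the mounted file:
  kubectl label pod labels-update-demo mylabel=value-2 --overwrite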
Dec 28 13:01:57.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:01:57.721: INFO: namespace downward-api-3409 deletion completed in 22.171362908s • [SLOW TEST:37.126 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:01:57.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 28 13:02:07.015: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:02:07.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9872" for this suite. 
Dec 28 13:02:13.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:02:13.193: INFO: namespace container-runtime-9872 deletion completed in 6.126852497s • [SLOW TEST:15.472 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:02:13.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:02:22.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2294" for this suite. 
Dec 28 13:02:44.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:02:44.553: INFO: namespace replication-controller-2294 deletion completed in 22.150884539s • [SLOW TEST:31.359 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:02:44.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 28 13:02:44.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993" in namespace "downward-api-4282" to be "success or failure" Dec 28 13:02:44.688: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Pending", Reason="", readiness=false. Elapsed: 9.6612ms Dec 28 13:02:46.697: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0188082s Dec 28 13:02:48.706: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028135669s Dec 28 13:02:50.717: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039058535s Dec 28 13:02:52.737: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058623119s Dec 28 13:02:54.752: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073607744s STEP: Saw pod success Dec 28 13:02:54.752: INFO: Pod "downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993" satisfied condition "success or failure" Dec 28 13:02:54.758: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993 container client-container: STEP: delete the pod Dec 28 13:02:54.897: INFO: Waiting for pod downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993 to disappear Dec 28 13:02:54.906: INFO: Pod downwardapi-volume-0f251e2f-6bb6-41a1-b907-3f2f69be6993 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:02:54.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4282" for this suite. 
Dec 28 13:03:00.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:03:01.097: INFO: namespace downward-api-4282 deletion completed in 6.183090265s • [SLOW TEST:16.544 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:03:01.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-fmtd STEP: Creating a pod to test atomic-volume-subpath Dec 28 13:03:01.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fmtd" in namespace "subpath-7692" to be "success or failure" Dec 28 13:03:01.183: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.814783ms Dec 28 13:03:03.193: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012056311s Dec 28 13:03:05.200: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019126305s Dec 28 13:03:07.209: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028829896s Dec 28 13:03:09.217: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 8.03641499s Dec 28 13:03:11.225: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 10.043969049s Dec 28 13:03:13.235: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 12.054039072s Dec 28 13:03:15.243: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.062186661s Dec 28 13:03:17.253: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 16.072775247s Dec 28 13:03:19.265: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 18.084897344s Dec 28 13:03:21.271: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 20.090801286s Dec 28 13:03:23.278: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 22.097271357s Dec 28 13:03:25.285: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.104544294s Dec 28 13:03:27.434: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Running", Reason="", readiness=true. Elapsed: 26.253824107s Dec 28 13:03:29.445: INFO: Pod "pod-subpath-test-configmap-fmtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.264891396s STEP: Saw pod success Dec 28 13:03:29.446: INFO: Pod "pod-subpath-test-configmap-fmtd" satisfied condition "success or failure" Dec 28 13:03:29.450: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-fmtd container test-container-subpath-configmap-fmtd: STEP: delete the pod Dec 28 13:03:29.535: INFO: Waiting for pod pod-subpath-test-configmap-fmtd to disappear Dec 28 13:03:29.593: INFO: Pod pod-subpath-test-configmap-fmtd no longer exists STEP: Deleting pod pod-subpath-test-configmap-fmtd Dec 28 13:03:29.593: INFO: Deleting pod "pod-subpath-test-configmap-fmtd" in namespace "subpath-7692" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:03:29.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7692" for this suite. Dec 28 13:03:35.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:03:35.832: INFO: namespace subpath-7692 deletion completed in 6.218629909s • [SLOW TEST:34.735 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:03:35.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 28 13:03:36.014: INFO: Waiting up to 5m0s for pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a" in namespace "emptydir-850" to be "success or failure" Dec 28 13:03:36.032: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.840808ms Dec 28 13:03:38.040: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026202587s Dec 28 13:03:40.058: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043909962s Dec 28 13:03:42.065: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.050986263s Dec 28 13:03:44.074: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Running", Reason="", readiness=true. Elapsed: 8.060050717s Dec 28 13:03:46.082: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067345976s STEP: Saw pod success Dec 28 13:03:46.082: INFO: Pod "pod-ccab3495-582d-4ba5-81fd-e4c9a728253a" satisfied condition "success or failure" Dec 28 13:03:46.086: INFO: Trying to get logs from node iruya-node pod pod-ccab3495-582d-4ba5-81fd-e4c9a728253a container test-container: STEP: delete the pod Dec 28 13:03:46.162: INFO: Waiting for pod pod-ccab3495-582d-4ba5-81fd-e4c9a728253a to disappear Dec 28 13:03:46.171: INFO: Pod pod-ccab3495-582d-4ba5-81fd-e4c9a728253a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:03:46.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-850" for this suite. Dec 28 13:03:52.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:03:52.420: INFO: namespace emptydir-850 deletion completed in 6.243458904s • [SLOW TEST:16.587 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:03:52.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 28 13:04:00.623: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 28 13:04:20.851: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:04:20.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5398" for this suite. 
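The grace-period test above deletes a pod and then waits for the kubelet to observe the termination notice before the API object disappears. The same flow from the command line, with an illustrative pod name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: grace-demo                # illustrative name
  spec:
    terminationGracePeriodSeconds: 30
    containers:
    - name: main
      image: nginx:1.14-alpine
  EOF
  kubectl wait --for=condition=Ready pod/grace-demo
  # Delete with an explicit grace period; the pod enters Terminating and is
  # removed once the kubelet confirms shutdown (or the period lapses).
  kubectl delete pod grace-demo --grace-period=30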
Dec 28 13:04:26.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:04:27.032: INFO: namespace pods-5398 deletion completed in 6.15377417s • [SLOW TEST:34.612 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:04:27.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3787 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3787 STEP: Creating statefulset with conflicting port in namespace statefulset-3787 STEP: Waiting until pod test-pod will start running in namespace statefulset-3787 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3787 Dec 28 13:04:39.234: INFO: Observed stateful pod in namespace: statefulset-3787, name: ss-0, uid: 8bc8e97c-2a81-401b-8817-54a8ce6c229b, status phase: Pending. Waiting for statefulset controller to delete. Dec 28 13:04:46.499: INFO: Observed stateful pod in namespace: statefulset-3787, name: ss-0, uid: 8bc8e97c-2a81-401b-8817-54a8ce6c229b, status phase: Failed. Waiting for statefulset controller to delete. Dec 28 13:04:46.519: INFO: Observed stateful pod in namespace: statefulset-3787, name: ss-0, uid: 8bc8e97c-2a81-401b-8817-54a8ce6c229b, status phase: Failed. Waiting for statefulset controller to delete. 
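The Failed phases observed above come from a scheduling conflict on the node: the test pod occupies a port that the stateful pod also requests, so ss-0 keeps failing until the conflicting pod is removed (below) and the controller's recreated ss-0 can run. A hedged sketch of such a conflicting pod — the name, node, and port are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: conflict-demo             # illustrative stand-in for test-pod
  spec:
    nodeName: iruya-node            # pin to the same node as ss-0
    containers:
    - name: web
      image: nginx:1.14-alpine
      ports:
      - containerPort: 21017
        hostPort: 21017             # any fixed hostPort shared with the stateful pod
  EOF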
Dec 28 13:04:46.527: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3787 STEP: Removing pod with conflicting port in namespace statefulset-3787 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3787 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 28 13:04:56.717: INFO: Deleting all statefulset in ns statefulset-3787 Dec 28 13:04:56.722: INFO: Scaling statefulset ss to 0 Dec 28 13:05:06.757: INFO: Waiting for statefulset status.replicas updated to 0 Dec 28 13:05:06.767: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:05:06.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3787" for this suite. Dec 28 13:05:12.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:05:13.036: INFO: namespace statefulset-3787 deletion completed in 6.185584036s • [SLOW TEST:46.004 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:05:13.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 28 13:05:13.173: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391290,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 28 13:05:13.174: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391291,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 28 13:05:13.174: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391292,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 28 13:05:23.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391307,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 28 13:05:23.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391308,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 28 13:05:23.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7354,SelfLink:/api/v1/namespaces/watch-7354/configmaps/e2e-watch-test-label-changed,UID:b92f4ccf-2ebe-44a1-a702-ef28e20191b9,ResourceVersion:18391309,Generation:0,CreationTimestamp:2019-12-28 13:05:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:05:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7354" for this suite. Dec 28 13:05:29.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:05:29.466: INFO: namespace watch-7354 deletion completed in 6.229053558s • [SLOW TEST:16.429 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:05:29.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6fca21d7-2134-4e04-8cc1-f2d82922d753 STEP: Creating configMap with name cm-test-opt-upd-3f219e00-94c9-4d5e-a1c9-36a4e9c01f1c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6fca21d7-2134-4e04-8cc1-f2d82922d753 STEP: Updating configmap cm-test-opt-upd-3f219e00-94c9-4d5e-a1c9-36a4e9c01f1c STEP: Creating configMap with name cm-test-opt-create-7bb6e6ae-46df-4077-b29a-54498edb2a22 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:07:08.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8985" for this suite. 
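The optional-updates test above mounts configMaps through a projected volume with optional: true, so a referenced configMap may be deleted, updated, or created late, and the mounted files eventually follow. A minimal sketch with illustrative names:

  kubectl create configmap cm-upd-demo --from-literal=data=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-optional-demo   # illustrative name
  spec:
    containers:
    - name: main
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /projected/upd/data 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: projected
        mountPath: /projected
    volumes:
    - name: projected
      projected:
        sources:
        - configMap:
            name: cm-upd-demo
            optional: true          # a missing configMap does not block the pod
            items:
            - key: data
              path: upd/data
  EOF
  # An update propagates to the mounted file after the next kubelet sync:
  kubectl create configmap cm-upd-demo --from-literal=data=value-2 -o yaml --dry-run | kubectl replace -f -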
Dec 28 13:07:32.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:07:32.525: INFO: namespace projected-8985 deletion completed in 24.188504792s • [SLOW TEST:123.057 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:07:32.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:08:31.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1047" for this suite. 
Dec 28 13:08:37.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:08:37.215: INFO: namespace container-runtime-1047 deletion completed in 6.164889296s • [SLOW TEST:64.690 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:08:37.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1361/configmap-test-ff727761-dede-46e7-ab29-a3e7200fe02d STEP: Creating a pod to test consume configMaps Dec 28 13:08:37.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e" in namespace "configmap-1361" to be "success or failure" Dec 28 13:08:37.421: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.529957ms Dec 28 13:08:39.430: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024260472s Dec 28 13:08:41.440: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033985839s Dec 28 13:08:43.456: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050558985s Dec 28 13:08:45.464: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058697646s Dec 28 13:08:47.471: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065171829s Dec 28 13:08:49.485: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.07959021s STEP: Saw pod success Dec 28 13:08:49.485: INFO: Pod "pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e" satisfied condition "success or failure" Dec 28 13:08:49.490: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e container env-test: STEP: delete the pod Dec 28 13:08:49.902: INFO: Waiting for pod pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e to disappear Dec 28 13:08:49.925: INFO: Pod pod-configmaps-7d0476f9-a3c0-4f83-b1b5-d06bb9e7654e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:08:49.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1361" for this suite. Dec 28 13:08:56.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:08:56.270: INFO: namespace configmap-1361 deletion completed in 6.32957141s • [SLOW TEST:19.054 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:08:56.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:08:56.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-507" for this suite. 
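The secure-master check above amounts to verifying that the built-in kubernetes service in the default namespace exposes the apiserver over https/443. The equivalent manual query:

  # The apiserver's ClusterIP service should always exist with an https port:
  kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}'
  # expected output: https 443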
Dec 28 13:09:03.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:09:03.200: INFO: namespace services-507 deletion completed in 6.23098062s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.929 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:09:03.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 28 13:09:04.310: INFO: Pod name wrapped-volume-race-d96e4d06-fa87-4e1a-96b4-fc16cea5decc: Found 0 pods out of 5 Dec 28 13:09:09.325: INFO: Pod name wrapped-volume-race-d96e4d06-fa87-4e1a-96b4-fc16cea5decc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d96e4d06-fa87-4e1a-96b4-fc16cea5decc in namespace emptydir-wrapper-679, will wait for the garbage collector to delete the pods Dec 28 13:09:43.408: INFO: Deleting ReplicationController wrapped-volume-race-d96e4d06-fa87-4e1a-96b4-fc16cea5decc took: 10.841768ms Dec 28 13:09:43.809: INFO: Terminating ReplicationController wrapped-volume-race-d96e4d06-fa87-4e1a-96b4-fc16cea5decc pods took: 401.008225ms STEP: Creating RC which spawns configmap-volume pods Dec 28 13:10:36.859: INFO: Pod name wrapped-volume-race-6edb443a-ed76-41ff-ac32-ae9a01955e64: Found 0 pods out of 5 Dec 28 13:10:41.888: INFO: Pod name wrapped-volume-race-6edb443a-ed76-41ff-ac32-ae9a01955e64: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6edb443a-ed76-41ff-ac32-ae9a01955e64 in namespace emptydir-wrapper-679, will wait for the garbage collector to delete the pods Dec 28 13:11:15.994: INFO: Deleting ReplicationController wrapped-volume-race-6edb443a-ed76-41ff-ac32-ae9a01955e64 took: 11.87366ms Dec 28 13:11:16.395: INFO: Terminating ReplicationController wrapped-volume-race-6edb443a-ed76-41ff-ac32-ae9a01955e64 pods took: 401.002172ms STEP: Creating RC which spawns configmap-volume pods Dec 28 13:12:07.790: INFO: Pod name wrapped-volume-race-9b1f2d92-835d-4178-9158-a9892b32b8cb: Found 0 pods out of 5 Dec 28 13:12:12.813: INFO: Pod name wrapped-volume-race-9b1f2d92-835d-4178-9158-a9892b32b8cb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9b1f2d92-835d-4178-9158-a9892b32b8cb in namespace emptydir-wrapper-679, will wait for the garbage collector to 
delete the pods Dec 28 13:12:48.926: INFO: Deleting ReplicationController wrapped-volume-race-9b1f2d92-835d-4178-9158-a9892b32b8cb took: 12.929986ms Dec 28 13:12:49.326: INFO: Terminating ReplicationController wrapped-volume-race-9b1f2d92-835d-4178-9158-a9892b32b8cb pods took: 400.4113ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:13:37.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-679" for this suite. Dec 28 13:13:49.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:13:49.974: INFO: namespace emptydir-wrapper-679 deletion completed in 12.17376298s • [SLOW TEST:286.774 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:13:49.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-087cc4db-20fd-4d93-8544-806f9c37a17f STEP: Creating a pod to test consume configMaps Dec 28 13:13:50.236: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1" in namespace "projected-181" to be "success or failure" Dec 28 13:13:50.244: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.986952ms Dec 28 13:13:52.258: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02226563s Dec 28 13:13:54.266: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030528263s Dec 28 13:13:56.320: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083716883s Dec 28 13:13:58.330: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094355237s Dec 28 13:14:00.336: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100066341s Dec 28 13:14:02.343: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.106917172s Dec 28 13:14:04.356: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120140845s Dec 28 13:14:06.369: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.133050861s STEP: Saw pod success Dec 28 13:14:06.369: INFO: Pod "pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1" satisfied condition "success or failure" Dec 28 13:14:06.373: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1 container projected-configmap-volume-test: STEP: delete the pod Dec 28 13:14:06.702: INFO: Waiting for pod pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1 to disappear Dec 28 13:14:06.715: INFO: Pod pod-projected-configmaps-23ea1290-28f4-4613-9fbe-70bb1817e1c1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:14:06.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-181" for this suite. Dec 28 13:14:12.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:14:12.939: INFO: namespace projected-181 deletion completed in 6.218306701s • [SLOW TEST:22.965 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:14:12.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 28 13:14:13.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2531' Dec 28 13:14:15.734: INFO: stderr: "" Dec 28 13:14:15.734: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Dec 28 13:14:15.741: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2531' Dec 28 13:14:22.891: INFO: stderr: "" Dec 28 13:14:22.891: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:14:22.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2531" for this suite. Dec 28 13:14:28.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:14:29.037: INFO: namespace kubectl-2531 deletion completed in 6.11606059s • [SLOW TEST:16.098 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:14:29.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e0924696-d7ee-44c4-b77d-a6dbbba61469 STEP: Creating a pod to test consume configMaps Dec 28 13:14:29.304: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0" in namespace "projected-7487" to be "success or failure" Dec 28 13:14:29.338: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.960599ms Dec 28 13:14:31.725: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.420991577s Dec 28 13:14:33.736: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43189227s Dec 28 13:14:35.744: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439379373s Dec 28 13:14:37.755: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450476845s Dec 28 13:14:39.770: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.466166526s STEP: Saw pod success Dec 28 13:14:39.771: INFO: Pod "pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0" satisfied condition "success or failure" Dec 28 13:14:39.777: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0 container projected-configmap-volume-test: STEP: delete the pod Dec 28 13:14:39.927: INFO: Waiting for pod pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0 to disappear Dec 28 13:14:39.985: INFO: Pod pod-projected-configmaps-bfa03fe6-d29b-4fd8-821f-de4f68aeefc0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:14:39.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7487" for this suite. Dec 28 13:14:46.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:14:46.183: INFO: namespace projected-7487 deletion completed in 6.188813258s • [SLOW TEST:17.145 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:14:46.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 28 13:14:46.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1397' Dec 28 13:14:46.619: INFO: stderr: "" Dec 28 13:14:46.619: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Dec 28 13:14:56.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1397 -o json' Dec 28 13:14:56.841: INFO: stderr: "" Dec 28 13:14:56.841: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-28T13:14:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": 
\"kubectl-1397\",\n \"resourceVersion\": \"18393142\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1397/pods/e2e-test-nginx-pod\",\n \"uid\": \"a15b0146-933f-4f91-9ae2-184aeed62198\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8r96r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8r96r\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8r96r\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-28T13:14:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-28T13:14:54Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-28T13:14:54Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-28T13:14:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://897f6e02dd768790f231742d96c8c45a01a585cbf4b8fd5f8a4049da3b5c70e8\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-28T13:14:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-28T13:14:46Z\"\n }\n}\n" STEP: replace the image in the pod Dec 28 13:14:56.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1397' Dec 28 13:14:57.233: INFO: stderr: "" Dec 28 13:14:57.233: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Dec 28 13:14:57.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1397' Dec 28 13:15:04.904: INFO: stderr: "" Dec 28 13:15:04.904: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:15:04.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1397" for this suite. Dec 28 13:15:11.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:15:11.081: INFO: namespace kubectl-1397 deletion completed in 6.096937595s • [SLOW TEST:24.896 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:15:11.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813 Dec 28 13:15:11.168: INFO: Pod name my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813: Found 0 pods out of 1 Dec 28 13:15:16.176: INFO: Pod name my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813: Found 1 pods out of 1 Dec 28 13:15:16.176: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813" are running Dec 28 13:15:20.189: INFO: Pod "my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813-swrnc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 13:15:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 13:15:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 13:15:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 13:15:11 +0000 UTC Reason: Message:}]) Dec 28 13:15:20.189: INFO: Trying to dial the pod Dec 28 13:15:25.345: INFO: Controller my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813: Got expected result from replica 1 [my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813-swrnc]: "my-hostname-basic-ec0f4ab5-d858-4de0-81fe-813c480da813-swrnc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:15:25.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2156" for this suite. Dec 28 13:15:31.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:15:31.625: INFO: namespace replication-controller-2156 deletion completed in 6.272753494s • [SLOW TEST:20.544 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:15:31.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1228 13:16:02.335042 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 28 13:16:02.335: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:16:02.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8217" for this suite. 
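
The garbage-collector test above deletes the Deployment with deleteOptions.PropagationPolicy=Orphan, then waits 30 seconds to confirm the ReplicaSet survives. A minimal sketch of the same orphan-delete by hand (deployment name hypothetical; on the kubectl generation used in this run, orphaning is spelled --cascade=false):

    # delete the owner, but orphan its dependents
    kubectl delete deployment test-deploy --cascade=false
    # the Deployment's ReplicaSet should still be listed afterwards
    kubectl get rs
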
Dec 28 13:16:08.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:16:09.274: INFO: namespace gc-8217 deletion completed in 6.935835209s • [SLOW TEST:37.646 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:16:09.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 28 13:16:22.041: INFO: 10 pods remaining Dec 28 13:16:22.041: INFO: 10 pods has nil DeletionTimestamp Dec 28 13:16:22.041: INFO: Dec 28 13:16:23.094: INFO: 0 pods remaining Dec 28 13:16:23.094: INFO: 0 pods has nil DeletionTimestamp Dec 28 13:16:23.094: INFO: STEP: Gathering metrics W1228 13:16:23.888575 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 28 13:16:23.888: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:16:23.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9098" for this suite. 
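
The rc in the test above is deleted with foreground cascading, so the rc object lingers until every pod it owns is gone, hence the "10 pods remaining" countdown. kubectl of this vintage exposes no flag for foreground deletion; a sketch of the equivalent raw API call (rc name and namespace hypothetical):

    kubectl proxy --port=8080 &
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc
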
Dec 28 13:16:37.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:16:38.060: INFO: namespace gc-9098 deletion completed in 14.162311686s • [SLOW TEST:28.786 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:16:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 28 13:16:58.365: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:16:58.395: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:00.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:00.411: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:02.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:02.405: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:04.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:04.407: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:06.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:06.405: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:08.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:08.789: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:10.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:10.404: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:12.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:12.402: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:14.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:14.405: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:16.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:16.410: INFO: Pod pod-with-poststart-http-hook still exists Dec 28 13:17:18.395: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 28 13:17:18.402: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container 
Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:17:18.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9315" for this suite. Dec 28 13:17:42.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:17:42.551: INFO: namespace container-lifecycle-hook-9315 deletion completed in 24.143165571s • [SLOW TEST:64.491 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:17:42.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 28 13:17:42.691: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 28 13:17:47.711: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 28 13:17:51.732: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 28 13:17:51.795: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/deployments/test-cleanup-deployment,UID:29c09930-160c-4e10-9d85-7132b3655104,ResourceVersion:18393640,Generation:1,CreationTimestamp:2019-12-28 13:17:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Dec 28 13:17:51.839: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/replicasets/test-cleanup-deployment-55bbcbc84c,UID:d6c86f7c-c138-47a6-9ac8-a9a701b1f5b3,ResourceVersion:18393642,Generation:1,CreationTimestamp:2019-12-28 13:17:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 29c09930-160c-4e10-9d85-7132b3655104 0xc002b4c247 0xc002b4c248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 28 13:17:51.839: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Dec 28 13:17:51.839: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/replicasets/test-cleanup-controller,UID:f2dde584-7244-4734-abf0-71e8134dd4fc,ResourceVersion:18393641,Generation:1,CreationTimestamp:2019-12-28 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 29c09930-160c-4e10-9d85-7132b3655104 0xc0021a7ff7 0xc0021a7ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 28 13:17:51.849: INFO: Pod "test-cleanup-controller-vjp8x" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vjp8x,GenerateName:test-cleanup-controller-,Namespace:deployment-6812,SelfLink:/api/v1/namespaces/deployment-6812/pods/test-cleanup-controller-vjp8x,UID:aa1cfb0b-c957-4e5c-83dd-2d560bdbab8d,ResourceVersion:18393636,Generation:0,CreationTimestamp:2019-12-28 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f2dde584-7244-4734-abf0-71e8134dd4fc 0xc002b4d147 0xc002b4d148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rclpn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rclpn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rclpn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4d1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4d240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:17:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:17:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:17:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:17:42 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-28 13:17:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:17:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e90cdb4765e6630dd6b0eb76249c3737e5a856075c0d6f60650ee5f72c28e2f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:17:51.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6812" for this suite. 
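
The Deployment dump above shows RevisionHistoryLimit:*0, which is what lets the suite observe the superseded ReplicaSet being deleted rather than retained. A minimal manifest sketch with the same setting (names hypothetical; image and labels taken from the log):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cleanup-demo
    spec:
      revisionHistoryLimit: 0    # keep no superseded ReplicaSets around
      replicas: 1
      selector:
        matchLabels:
          name: cleanup-pod
      template:
        metadata:
          labels:
            name: cleanup-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
    EOF
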
Dec 28 13:17:58.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:17:58.238: INFO: namespace deployment-6812 deletion completed in 6.328318214s • [SLOW TEST:15.687 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:17:58.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 28 13:17:58.347: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 28 13:17:58.460: INFO: Waiting for terminating namespaces to be deleted... Dec 28 13:17:58.467: INFO: Logging pods the kubelet thinks is on node iruya-node before test Dec 28 13:17:58.490: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 28 13:17:58.490: INFO: Container weave ready: true, restart count 0 Dec 28 13:17:58.490: INFO: Container weave-npc ready: true, restart count 0 Dec 28 13:17:58.490: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.490: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 13:17:58.490: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Dec 28 13:17:58.509: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container kube-scheduler ready: true, restart count 10 Dec 28 13:17:58.510: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container coredns ready: true, restart count 0 Dec 28 13:17:58.510: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container etcd ready: true, restart count 0 Dec 28 13:17:58.510: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 28 13:17:58.510: INFO: Container weave ready: true, restart count 0 Dec 28 13:17:58.510: INFO: Container weave-npc ready: true, restart count 0 Dec 28 13:17:58.510: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container coredns ready: true, restart count 0 Dec 28 13:17:58.510: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: 
Container kube-controller-manager ready: true, restart count 14 Dec 28 13:17:58.510: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container kube-proxy ready: true, restart count 0 Dec 28 13:17:58.510: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Dec 28 13:17:58.510: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e48b6f0c570468], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:17:59.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3739" for this suite. Dec 28 13:18:05.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:18:05.889: INFO: namespace sched-pred-3739 deletion completed in 6.331710064s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.650 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:18:05.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 28 13:18:06.092: INFO: Waiting up to 5m0s for pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7" in namespace "emptydir-4727" to be "success or failure" Dec 28 13:18:06.097: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.862145ms Dec 28 13:18:08.108: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015891873s Dec 28 13:18:11.243: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.151053004s Dec 28 13:18:13.253: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.161103457s Dec 28 13:18:15.262: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.17022613s STEP: Saw pod success Dec 28 13:18:15.262: INFO: Pod "pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7" satisfied condition "success or failure" Dec 28 13:18:15.267: INFO: Trying to get logs from node iruya-node pod pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7 container test-container: STEP: delete the pod Dec 28 13:18:15.404: INFO: Waiting for pod pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7 to disappear Dec 28 13:18:15.414: INFO: Pod pod-3faccb86-022a-4f3a-b96c-4b0a07882ec7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:18:15.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4727" for this suite. Dec 28 13:18:21.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:18:21.667: INFO: namespace emptydir-4727 deletion completed in 6.247468355s • [SLOW TEST:15.778 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:18:21.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 28 13:18:21.934: INFO: Waiting up to 5m0s for pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336" in namespace "emptydir-5248" to be "success or failure" Dec 28 13:18:21.949: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Pending", Reason="", readiness=false. Elapsed: 14.401159ms Dec 28 13:18:23.966: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032098664s Dec 28 13:18:25.977: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042741041s Dec 28 13:18:27.993: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059213165s Dec 28 13:18:30.016: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081524386s Dec 28 13:18:32.030: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Running", Reason="", readiness=true. Elapsed: 10.095797139s Dec 28 13:18:34.057: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.122288978s STEP: Saw pod success Dec 28 13:18:34.057: INFO: Pod "pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336" satisfied condition "success or failure" Dec 28 13:18:34.071: INFO: Trying to get logs from node iruya-node pod pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336 container test-container: STEP: delete the pod Dec 28 13:18:34.201: INFO: Waiting for pod pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336 to disappear Dec 28 13:18:34.211: INFO: Pod pod-1716cb2f-08a4-4416-bcc8-0c52bd23b336 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:18:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5248" for this suite. Dec 28 13:18:40.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:18:40.469: INFO: namespace emptydir-5248 deletion completed in 6.247214255s • [SLOW TEST:18.801 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:18:40.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-493 to expose endpoints map[] Dec 28 13:18:40.663: INFO: Get endpoints failed (15.398567ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 28 13:18:41.670: INFO: successfully validated that service endpoint-test2 in namespace services-493 exposes endpoints map[] (1.023066105s elapsed) STEP: Creating pod pod1 in namespace services-493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-493 to expose endpoints map[pod1:[80]] Dec 28 13:18:46.789: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.085736503s elapsed, will retry) Dec 28 13:18:50.872: INFO: successfully validated that service endpoint-test2 in namespace services-493 exposes endpoints map[pod1:[80]] (9.168208089s elapsed) STEP: Creating pod pod2 in namespace services-493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-493 to expose endpoints map[pod1:[80] pod2:[80]] Dec 28 13:18:55.701: INFO: Unexpected endpoints: found map[dc1f9444-5937-4269-9d32-91aa76bc96cb:[80]], expected map[pod1:[80] pod2:[80]] (4.79375887s elapsed, will retry) Dec 28 13:18:59.588: INFO: successfully validated that service endpoint-test2 in namespace services-493 
exposes endpoints map[pod1:[80] pod2:[80]] (8.680412528s elapsed) STEP: Deleting pod pod1 in namespace services-493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-493 to expose endpoints map[pod2:[80]] Dec 28 13:18:59.761: INFO: successfully validated that service endpoint-test2 in namespace services-493 exposes endpoints map[pod2:[80]] (135.571198ms elapsed) STEP: Deleting pod pod2 in namespace services-493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-493 to expose endpoints map[] Dec 28 13:18:59.888: INFO: successfully validated that service endpoint-test2 in namespace services-493 exposes endpoints map[] (72.402811ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:18:59.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-493" for this suite. Dec 28 13:19:24.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:19:24.176: INFO: namespace services-493 deletion completed in 24.200915375s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:43.706 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:19:24.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Dec 28 13:19:24.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 28 13:19:24.510: INFO: stderr: "" Dec 28 13:19:24.511: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:19:24.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6089" for this suite. 
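
The cluster-info check above only greps the colorized "Kubernetes master" and "KubeDNS" lines out of kubectl's stdout. The same check by hand, plus the dump variant the tool itself suggests (output directory hypothetical):

    kubectl cluster-info
    kubectl cluster-info dump --output-directory=/tmp/cluster-state
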
Dec 28 13:19:30.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:19:30.719: INFO: namespace kubectl-6089 deletion completed in 6.198552371s • [SLOW TEST:6.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:19:30.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-ee24153d-bfa7-4e1a-9b2e-b61c7505d006 STEP: Creating configMap with name cm-test-opt-upd-d7b0e40c-4833-4544-91d0-cd2262032b2b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ee24153d-bfa7-4e1a-9b2e-b61c7505d006 STEP: Updating configmap cm-test-opt-upd-d7b0e40c-4833-4544-91d0-cd2262032b2b STEP: Creating configMap with name cm-test-opt-create-8977b8c4-bc0f-431a-8b1f-c2ac33084d5d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:19:45.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5301" for this suite. 
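
The configmap test above leans on optional configMap volumes: the pod keeps running while one ConfigMap is deleted, another updated, and a third created underneath it. A manifest sketch of such a volume (pod and ConfigMap names hypothetical):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-cm-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: maybe-missing-cm
          optional: true    # the pod starts even if this ConfigMap is absent
    EOF
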
Dec 28 13:20:09.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:20:09.446: INFO: namespace configmap-5301 deletion completed in 24.20256949s • [SLOW TEST:38.727 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:20:09.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Dec 28 13:20:09.606: INFO: Waiting up to 5m0s for pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb" in namespace "containers-7762" to be "success or failure" Dec 28 13:20:09.626: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.444883ms Dec 28 13:20:11.634: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027694937s Dec 28 13:20:13.647: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039942547s Dec 28 13:20:15.656: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049414643s Dec 28 13:20:17.665: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057928451s Dec 28 13:20:19.673: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066323683s STEP: Saw pod success Dec 28 13:20:19.673: INFO: Pod "client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb" satisfied condition "success or failure" Dec 28 13:20:19.678: INFO: Trying to get logs from node iruya-node pod client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb container test-container: STEP: delete the pod Dec 28 13:20:19.763: INFO: Waiting for pod client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb to disappear Dec 28 13:20:19.775: INFO: Pod client-containers-51589bb1-0cb7-44ef-98a6-9eda3c1791bb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:20:19.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7762" for this suite. 
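
Overriding an image's default arguments, as exercised above, maps to the container's args field; setting command instead would override the image ENTRYPOINT. A minimal sketch, names hypothetical:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        args: ["echo", "CMD overridden"]    # replaces the image's docker CMD
    EOF
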
Dec 28 13:20:25.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:20:26.056: INFO: namespace containers-7762 deletion completed in 6.264506556s • [SLOW TEST:16.609 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:20:26.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8626 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 28 13:20:26.143: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 28 13:21:04.282: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8626 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 13:21:04.282: INFO: >>> kubeConfig: /root/.kube/config Dec 28 13:21:05.728: INFO: Found all expected endpoints: [netserver-0] Dec 28 13:21:06.367: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8626 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 28 13:21:06.367: INFO: >>> kubeConfig: /root/.kube/config Dec 28 13:21:07.910: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 28 13:21:07.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8626" for this suite. 
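
Each UDP probe above pipes "hostName" into nc from the helper pod and expects the netserver behind the target IP to answer with its hostname. Repeating one probe by hand (pod name, namespace, and target IP copied from this run's log; they differ per run):

    kubectl exec host-test-container-pod -n pod-network-test-8626 -- \
      /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081"
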
Dec 28 13:21:31.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 28 13:21:32.117: INFO: namespace pod-network-test-8626 deletion completed in 24.19613245s • [SLOW TEST:66.061 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 28 13:21:32.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-741d5381-e230-4c8b-93e9-06c880db5915 STEP: Creating a pod to test consume configMaps Dec 28 13:21:32.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b" in namespace "configmap-1159" to be "success or failure" Dec 28 13:21:32.367: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.333028ms Dec 28 13:21:34.376: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10558024s Dec 28 13:21:36.387: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116586372s Dec 28 13:21:38.402: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13126274s Dec 28 13:21:40.431: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160571136s Dec 28 13:21:42.441: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:21:32.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-741d5381-e230-4c8b-93e9-06c880db5915
STEP: Creating a pod to test consume configMaps
Dec 28 13:21:32.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b" in namespace "configmap-1159" to be "success or failure"
Dec 28 13:21:32.367: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.333028ms
Dec 28 13:21:34.376: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10558024s
Dec 28 13:21:36.387: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116586372s
Dec 28 13:21:38.402: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13126274s
Dec 28 13:21:40.431: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160571136s
Dec 28 13:21:42.441: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170669728s
STEP: Saw pod success
Dec 28 13:21:42.442: INFO: Pod "pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b" satisfied condition "success or failure"
Dec 28 13:21:42.446: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b container configmap-volume-test:
STEP: delete the pod
Dec 28 13:21:42.508: INFO: Waiting for pod pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b to disappear
Dec 28 13:21:42.514: INFO: Pod pod-configmaps-7d90ac75-9759-44e9-9da4-02630c5c6e5b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:21:42.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1159" for this suite.
Dec 28 13:21:48.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:21:48.769: INFO: namespace configmap-1159 deletion completed in 6.250083782s

• [SLOW TEST:16.651 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
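The defaultMode knob exercised here belongs to the configMap volume source; it sets the permission bits on every file projected into the volume. A minimal sketch of the same consumption pattern, all names illustrative:

  kubectl create configmap demo-config --from-literal=data-1=value-1

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/config && cat /etc/config/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-config
        defaultMode: 0400   # every projected file becomes read-only for the owner
  EOF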
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:21:48.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6631, will wait for the garbage collector to delete the pods
Dec 28 13:21:58.894: INFO: Deleting Job.batch foo took: 15.842442ms
Dec 28 13:21:59.195: INFO: Terminating Job.batch foo pods took: 300.720012ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:22:35.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6631" for this suite.
Dec 28 13:22:41.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:22:41.898: INFO: namespace job-6631 deletion completed in 6.190297387s

• [SLOW TEST:53.128 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:22:41.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 28 13:22:42.022: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix152939359/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:22:42.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-50" for this suite.
Dec 28 13:22:48.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:22:48.255: INFO: namespace kubectl-50 deletion completed in 6.159252086s

• [SLOW TEST:6.356 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
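The --unix-socket flag exercised just above makes kubectl proxy listen on a Unix domain socket instead of a TCP port, keeping the unauthenticated proxy endpoint off the network entirely. A sketch of the same round trip (socket path arbitrary; curl 7.40+ is needed for --unix-socket):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  # the host part of the URL is ignored; the socket carries the request
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/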
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:22:48.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 28 13:23:06.493: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:06.509: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:08.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:08.531: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:10.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:10.524: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:12.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:12.522: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:14.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:14.531: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:16.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:16.536: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:18.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:18.688: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:20.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:20.537: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:22.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:22.522: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:24.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:24.576: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:26.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:26.526: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:28.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:28.520: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:30.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:30.558: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:32.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:32.572: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:34.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:34.527: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:36.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:36.539: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 13:23:38.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 13:23:38.533: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:23:38.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2805" for this suite.
Dec 28 13:24:00.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:24:00.948: INFO: namespace container-lifecycle-hook-2805 deletion completed in 22.213169852s

• [SLOW TEST:72.693 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
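A preStop exec hook like the one driven above runs inside the container after the delete is issued but before the kubelet sends SIGTERM, and the pod stays Terminating until the hook returns (up to the grace period), which is why the log polls for half a minute before the pod disappears. A minimal sketch, not the suite's actual handler-pod arrangement:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            # executed in the container, before SIGTERM, on pod deletion
            command: ["sh", "-c", "echo goodbye > /tmp/prestop; sleep 5"]
  EOF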
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:24:00.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c91580a7-0d7e-46dc-a048-b99155a5c063
STEP: Creating a pod to test consume configMaps
Dec 28 13:24:01.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e" in namespace "projected-8845" to be "success or failure"
Dec 28 13:24:01.206: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.501781ms
Dec 28 13:24:03.217: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056344516s
Dec 28 13:24:05.225: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064414254s
Dec 28 13:24:07.238: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077604519s
Dec 28 13:24:09.252: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091738559s
Dec 28 13:24:11.262: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102140599s
STEP: Saw pod success
Dec 28 13:24:11.263: INFO: Pod "pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e" satisfied condition "success or failure"
Dec 28 13:24:11.272: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e container projected-configmap-volume-test:
STEP: delete the pod
Dec 28 13:24:11.356: INFO: Waiting for pod pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e to disappear
Dec 28 13:24:11.368: INFO: Pod pod-projected-configmaps-ac6ac73d-afc1-4642-b6b0-01a80a48972e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:24:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8845" for this suite.
Dec 28 13:24:17.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:24:17.529: INFO: namespace projected-8845 deletion completed in 6.152290152s

• [SLOW TEST:16.580 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
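"Multiple volumes in the same pod" here means the same configMap projected through two independent volume mounts. A sketch of that shape, names illustrative:

  kubectl create configmap demo-projected --from-literal=data-1=value-1

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-multi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-1
      - name: vol-2
        mountPath: /etc/projected-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: demo-projected
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: demo-projected
  EOF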
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:24:17.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 28 13:24:26.729: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:24:26.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1787" for this suite.
Dec 28 13:24:50.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:24:51.025: INFO: namespace replicaset-1787 deletion completed in 24.208414822s

• [SLOW TEST:33.495 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
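Adoption and release are pure selector mechanics: the ReplicaSet adopts the orphan because its labels match, and relabelling the pod out of the selector (the "matched label ... change" step above) makes the controller drop the ownerReference and spin up a replacement. By hand, against a pod carrying the 'name' label as in this test (the new label value is illustrative):

  kubectl label pod pod-adoption-release name=released --overwrite
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # now empty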
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:24:51.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e in namespace container-probe-5583
Dec 28 13:24:59.257: INFO: Started pod liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e in namespace container-probe-5583
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 13:24:59.260: INFO: Initial restart count of pod liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is 0
Dec 28 13:25:19.419: INFO: Restart count of pod container-probe-5583/liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is now 1 (20.158702682s elapsed)
Dec 28 13:25:39.519: INFO: Restart count of pod container-probe-5583/liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is now 2 (40.258850683s elapsed)
Dec 28 13:25:59.634: INFO: Restart count of pod container-probe-5583/liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is now 3 (1m0.373696928s elapsed)
Dec 28 13:26:19.752: INFO: Restart count of pod container-probe-5583/liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is now 4 (1m20.491832857s elapsed)
Dec 28 13:27:22.661: INFO: Restart count of pod container-probe-5583/liveness-7c65b2bf-34cc-4f98-a0e2-85004b57412e is now 5 (2m23.400560694s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:27:22.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5583" for this suite.
Dec 28 13:27:28.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:27:29.008: INFO: namespace container-probe-5583 deletion completed in 6.13251268s

• [SLOW TEST:157.983 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
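The monotonic restartCount above is the kubelet restarting one container in place each time its exec probe fails; the pod itself is never rescheduled. A liveness probe with the same failure pattern, as a sketch (image and timings illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: liveness
      image: busybox
      # healthy for 30s, then the probed file disappears and every probe fails
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF

  kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'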
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:27:29.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:27:29.067: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 28 13:27:31.926: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:27:32.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7422" for this suite.
Dec 28 13:27:44.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:27:44.776: INFO: namespace replication-controller-7422 deletion completed in 12.30558802s

• [SLOW TEST:15.768 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
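The failure condition being surfaced is a ReplicaFailure entry in the controller's status: with a two-pod quota in place, the third pod create is rejected by quota admission and the RC records the rejection. A sketch of the same setup (pause image and selector labels illustrative):

  kubectl create quota condition-test --hard=pods=2

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3
    selector:
      app: condition-test
    template:
      metadata:
        labels:
          app: condition-test
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1
  EOF

  kubectl get rc condition-test -o jsonpath='{.status.conditions}'   # shows ReplicaFailure
  kubectl scale rc condition-test --replicas=2                       # back under quota; condition clears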
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:27:44.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-4635
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4635 to expose endpoints map[]
Dec 28 13:27:45.008: INFO: Get endpoints failed (15.877627ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 28 13:27:46.018: INFO: successfully validated that service multi-endpoint-test in namespace services-4635 exposes endpoints map[] (1.025153529s elapsed)
STEP: Creating pod pod1 in namespace services-4635
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4635 to expose endpoints map[pod1:[100]]
Dec 28 13:27:50.430: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.395460258s elapsed, will retry)
Dec 28 13:27:53.475: INFO: successfully validated that service multi-endpoint-test in namespace services-4635 exposes endpoints map[pod1:[100]] (7.441230101s elapsed)
STEP: Creating pod pod2 in namespace services-4635
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4635 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 28 13:27:59.339: INFO: Unexpected endpoints: found map[ea802f64-1357-4ce8-9392-538212bdeadf:[100]], expected map[pod1:[100] pod2:[101]] (5.855297989s elapsed, will retry)
Dec 28 13:28:02.490: INFO: successfully validated that service multi-endpoint-test in namespace services-4635 exposes endpoints map[pod1:[100] pod2:[101]] (9.006584665s elapsed)
STEP: Deleting pod pod1 in namespace services-4635
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4635 to expose endpoints map[pod2:[101]]
Dec 28 13:28:02.568: INFO: successfully validated that service multi-endpoint-test in namespace services-4635 exposes endpoints map[pod2:[101]] (55.090544ms elapsed)
STEP: Deleting pod pod2 in namespace services-4635
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4635 to expose endpoints map[]
Dec 28 13:28:02.634: INFO: successfully validated that service multi-endpoint-test in namespace services-4635 exposes endpoints map[] (42.241535ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:28:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4635" for this suite.
Dec 28 13:28:24.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:28:24.866: INFO: namespace services-4635 deletion completed in 22.114999409s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.089 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
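The endpoints maps above (pod1:[100] pod2:[101]) are the targetPorts of a two-port service, resolved against whichever pods currently back it. The service shape being tested, sketched with the selector and port names as illustrative assumptions (the numbers match the observed targetPorts):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multi-endpoint-test
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101
  EOF

  kubectl get endpoints multi-endpoint-test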
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:28:24.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 13:28:37.088: INFO: File wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-1bfaa102-2c40-483b-a55b-d9b20878bb18 contains '' instead of 'foo.example.com.'
Dec 28 13:28:37.094: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-1bfaa102-2c40-483b-a55b-d9b20878bb18 contains '' instead of 'foo.example.com.'
Dec 28 13:28:37.094: INFO: Lookups using dns-850/dns-test-1bfaa102-2c40-483b-a55b-d9b20878bb18 failed for: [wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:28:42.115: INFO: DNS probes using dns-test-1bfaa102-2c40-483b-a55b-d9b20878bb18 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 13:28:56.378: INFO: File wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains '' instead of 'bar.example.com.'
Dec 28 13:28:56.384: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains '' instead of 'bar.example.com.'
Dec 28 13:28:56.384: INFO: Lookups using dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 failed for: [wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:29:01.412: INFO: File wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 13:29:01.424: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 13:29:01.424: INFO: Lookups using dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 failed for: [wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:29:06.398: INFO: File wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 13:29:06.404: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 13:29:06.404: INFO: Lookups using dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 failed for: [wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:29:11.407: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 28 13:29:11.408: INFO: Lookups using dns-850/dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 failed for: [jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:29:16.402: INFO: DNS probes using dns-test-2c7cc566-14a9-44f3-85c4-c10a19849153 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-850.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-850.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 13:29:30.747: INFO: File wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-5ff347ca-80cc-4c64-adcb-a8e991f66ff2 contains '' instead of '10.111.144.71'
Dec 28 13:29:30.756: INFO: File jessie_udp@dns-test-service-3.dns-850.svc.cluster.local from pod dns-850/dns-test-5ff347ca-80cc-4c64-adcb-a8e991f66ff2 contains '' instead of '10.111.144.71'
Dec 28 13:29:30.756: INFO: Lookups using dns-850/dns-test-5ff347ca-80cc-4c64-adcb-a8e991f66ff2 failed for: [wheezy_udp@dns-test-service-3.dns-850.svc.cluster.local jessie_udp@dns-test-service-3.dns-850.svc.cluster.local]
Dec 28 13:29:35.787: INFO: DNS probes using dns-test-5ff347ca-80cc-4c64-adcb-a8e991f66ff2 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:29:35.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-850" for this suite.
Dec 28 13:29:44.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:29:44.254: INFO: namespace dns-850 deletion completed in 8.145084673s

• [SLOW TEST:79.387 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
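An ExternalName service publishes nothing but a CNAME, which is exactly what the probes above dig for; changing spec.externalName (foo to bar) or flipping the type to ClusterIP changes what the same cluster DNS name resolves to. A sketch, with the service name and namespace taken from this run:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
  spec:
    type: ExternalName
    externalName: foo.example.com
  EOF

  # from any pod inside the cluster:
  dig +short dns-test-service-3.dns-850.svc.cluster.local CNAME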
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:29:44.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:30:14.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7369" for this suite.
Dec 28 13:30:20.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:30:20.985: INFO: namespace namespaces-7369 deletion completed in 6.256822552s
STEP: Destroying namespace "nsdeletetest-983" for this suite.
Dec 28 13:30:20.988: INFO: Namespace nsdeletetest-983 was already deleted
STEP: Destroying namespace "nsdeletetest-7405" for this suite.
Dec 28 13:30:27.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:30:27.149: INFO: namespace nsdeletetest-7405 deletion completed in 6.161228227s

• [SLOW TEST:42.895 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:30:27.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:30:27.278: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 21.19709ms)
Dec 28 13:30:27.285: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.508402ms)
Dec 28 13:30:27.290: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.543386ms)
Dec 28 13:30:27.296: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.969842ms)
Dec 28 13:30:27.327: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.477438ms)
Dec 28 13:30:27.336: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.084273ms)
Dec 28 13:30:27.342: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.934087ms)
Dec 28 13:30:27.347: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.585697ms)
Dec 28 13:30:27.355: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.418987ms)
Dec 28 13:30:27.361: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.677773ms)
Dec 28 13:30:27.368: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.124194ms)
Dec 28 13:30:27.374: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.099397ms)
Dec 28 13:30:27.380: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.855173ms)
Dec 28 13:30:27.385: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.576466ms)
Dec 28 13:30:27.394: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.344064ms)
Dec 28 13:30:27.403: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.307788ms)
Dec 28 13:30:27.409: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.738362ms)
Dec 28 13:30:27.414: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.276001ms)
Dec 28 13:30:27.420: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.936523ms)
Dec 28 13:30:27.427: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.698681ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:30:27.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-902" for this suite.
Dec 28 13:30:33.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:30:33.610: INFO: namespace proxy-902 deletion completed in 6.178761025s

• [SLOW TEST:6.460 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
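Each of the twenty (0)..(19) entries in the test above is one GET against the node's proxy subresource, which the apiserver relays to the kubelet's /logs handler; the truncated body is the directory listing of the node's /var/log. Reproduced by hand through a local proxy (port arbitrary):

  kubectl proxy --port=8001 &
  curl http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/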
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:30:33.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 28 13:30:33.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6243'
Dec 28 13:30:36.592: INFO: stderr: ""
Dec 28 13:30:36.592: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 13:30:36.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6243'
Dec 28 13:30:36.798: INFO: stderr: ""
Dec 28 13:30:36.798: INFO: stdout: "update-demo-nautilus-nbxkt update-demo-nautilus-rrhfs "
Dec 28 13:30:36.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbxkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:36.911: INFO: stderr: ""
Dec 28 13:30:36.911: INFO: stdout: ""
Dec 28 13:30:36.911: INFO: update-demo-nautilus-nbxkt is created but not running
Dec 28 13:30:41.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6243'
Dec 28 13:30:42.098: INFO: stderr: ""
Dec 28 13:30:42.099: INFO: stdout: "update-demo-nautilus-nbxkt update-demo-nautilus-rrhfs "
Dec 28 13:30:42.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbxkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:42.399: INFO: stderr: ""
Dec 28 13:30:42.400: INFO: stdout: ""
Dec 28 13:30:42.400: INFO: update-demo-nautilus-nbxkt is created but not running
Dec 28 13:30:47.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6243'
Dec 28 13:30:47.565: INFO: stderr: ""
Dec 28 13:30:47.566: INFO: stdout: "update-demo-nautilus-nbxkt update-demo-nautilus-rrhfs "
Dec 28 13:30:47.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbxkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:47.739: INFO: stderr: ""
Dec 28 13:30:47.739: INFO: stdout: "true"
Dec 28 13:30:47.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbxkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:47.912: INFO: stderr: ""
Dec 28 13:30:47.912: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 13:30:47.912: INFO: validating pod update-demo-nautilus-nbxkt
Dec 28 13:30:47.937: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 13:30:47.937: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 13:30:47.937: INFO: update-demo-nautilus-nbxkt is verified up and running
Dec 28 13:30:47.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrhfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:48.077: INFO: stderr: ""
Dec 28 13:30:48.078: INFO: stdout: "true"
Dec 28 13:30:48.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrhfs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6243'
Dec 28 13:30:48.172: INFO: stderr: ""
Dec 28 13:30:48.172: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 13:30:48.172: INFO: validating pod update-demo-nautilus-rrhfs
Dec 28 13:30:48.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 13:30:48.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 13:30:48.206: INFO: update-demo-nautilus-rrhfs is verified up and running
STEP: using delete to clean up resources
Dec 28 13:30:48.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6243'
Dec 28 13:30:48.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:30:48.343: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 28 13:30:48.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6243'
Dec 28 13:30:48.445: INFO: stderr: "No resources found.\n"
Dec 28 13:30:48.445: INFO: stdout: ""
Dec 28 13:30:48.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6243 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 13:30:48.549: INFO: stderr: ""
Dec 28 13:30:48.550: INFO: stdout: "update-demo-nautilus-nbxkt\nupdate-demo-nautilus-rrhfs\n"
Dec 28 13:30:49.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6243'
Dec 28 13:30:49.183: INFO: stderr: "No resources found.\n"
Dec 28 13:30:49.183: INFO: stdout: ""
Dec 28 13:30:49.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6243 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 13:30:49.291: INFO: stderr: ""
Dec 28 13:30:49.291: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:30:49.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6243" for this suite.
Dec 28 13:31:11.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:31:11.450: INFO: namespace kubectl-6243 deletion completed in 22.151384386s

• [SLOW TEST:37.840 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
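The readiness loop in the test above leans on kubectl's legacy go-template output with its exists helper. The same per-container check can be written with jsonpath, as a hedged equivalent rather than what the suite actually runs (pod name and namespace taken from this run):

  kubectl get pod update-demo-nautilus-nbxkt --namespace=kubectl-6243 \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running}'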
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:31:11.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 28 13:31:11.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8130'
Dec 28 13:31:11.894: INFO: stderr: ""
Dec 28 13:31:11.894: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 13:31:11.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8130'
Dec 28 13:31:12.024: INFO: stderr: ""
Dec 28 13:31:12.024: INFO: stdout: "update-demo-nautilus-pbc5f "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 28 13:31:17.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8130'
Dec 28 13:31:18.240: INFO: stderr: ""
Dec 28 13:31:18.240: INFO: stdout: "update-demo-nautilus-pbc5f update-demo-nautilus-xwhqc "
Dec 28 13:31:18.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbc5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:19.143: INFO: stderr: ""
Dec 28 13:31:19.143: INFO: stdout: ""
Dec 28 13:31:19.143: INFO: update-demo-nautilus-pbc5f is created but not running
Dec 28 13:31:24.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8130'
Dec 28 13:31:24.269: INFO: stderr: ""
Dec 28 13:31:24.269: INFO: stdout: "update-demo-nautilus-pbc5f update-demo-nautilus-xwhqc "
Dec 28 13:31:24.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbc5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:24.389: INFO: stderr: ""
Dec 28 13:31:24.389: INFO: stdout: "true"
Dec 28 13:31:24.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbc5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:24.548: INFO: stderr: ""
Dec 28 13:31:24.548: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 13:31:24.548: INFO: validating pod update-demo-nautilus-pbc5f
Dec 28 13:31:24.568: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 13:31:24.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 13:31:24.568: INFO: update-demo-nautilus-pbc5f is verified up and running
Dec 28 13:31:24.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:24.651: INFO: stderr: ""
Dec 28 13:31:24.651: INFO: stdout: "true"
Dec 28 13:31:24.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhqc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:24.732: INFO: stderr: ""
Dec 28 13:31:24.732: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 13:31:24.732: INFO: validating pod update-demo-nautilus-xwhqc
Dec 28 13:31:24.737: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 13:31:24.737: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 13:31:24.737: INFO: update-demo-nautilus-xwhqc is verified up and running
STEP: rolling-update to new replication controller
Dec 28 13:31:24.770: INFO: scanned /root for discovery docs: 
Dec 28 13:31:24.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8130'
Dec 28 13:31:56.724: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 13:31:56.724: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 13:31:56.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8130'
Dec 28 13:31:56.883: INFO: stderr: ""
Dec 28 13:31:56.883: INFO: stdout: "update-demo-kitten-jp42d update-demo-kitten-tprlp "
Dec 28 13:31:56.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jp42d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:57.013: INFO: stderr: ""
Dec 28 13:31:57.013: INFO: stdout: "true"
Dec 28 13:31:57.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jp42d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:57.170: INFO: stderr: ""
Dec 28 13:31:57.170: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 13:31:57.170: INFO: validating pod update-demo-kitten-jp42d
Dec 28 13:31:57.189: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 13:31:57.189: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 28 13:31:57.189: INFO: update-demo-kitten-jp42d is verified up and running
Dec 28 13:31:57.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tprlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:57.291: INFO: stderr: ""
Dec 28 13:31:57.291: INFO: stdout: "true"
Dec 28 13:31:57.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tprlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8130'
Dec 28 13:31:57.373: INFO: stderr: ""
Dec 28 13:31:57.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 13:31:57.373: INFO: validating pod update-demo-kitten-tprlp
Dec 28 13:31:57.413: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 13:31:57.413: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 28 13:31:57.413: INFO: update-demo-kitten-tprlp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:31:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8130" for this suite.
Dec 28 13:32:19.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:32:19.553: INFO: namespace kubectl-8130 deletion completed in 22.133748184s

• [SLOW TEST:68.102 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
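The rolling update above is driven by the v1.15-era `kubectl rolling-update` command, which scales one replication controller down while scaling its replacement up. A sketch of the kind of invocation that produces this output, inferred from the log rather than taken from the test source (kitten-rc.yaml is a hypothetical file defining an RC named update-demo-kitten running the kitten:1.0 image):

kubectl rolling-update update-demo-nautilus --update-period=1s \
    -f kitten-rc.yaml --namespace=kubectl-8130

Because only the old controller's name is given, kubectl keeps that name: once the replacement is fully scaled up it deletes update-demo-nautilus and renames update-demo-kitten to take its place, which is exactly the "Renaming ..." line in the stdout above.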
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:32:19.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:32:19.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec" in namespace "downward-api-3463" to be "success or failure"
Dec 28 13:32:19.704: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec": Phase="Pending", Reason="", readiness=false. Elapsed: 59.52563ms
Dec 28 13:32:21.721: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076916929s
Dec 28 13:32:23.747: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102163689s
Dec 28 13:32:25.763: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118352676s
Dec 28 13:32:27.776: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131982011s
STEP: Saw pod success
Dec 28 13:32:27.777: INFO: Pod "downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec" satisfied condition "success or failure"
Dec 28 13:32:27.781: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec container client-container: 
STEP: delete the pod
Dec 28 13:32:27.933: INFO: Waiting for pod downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec to disappear
Dec 28 13:32:27.944: INFO: Pod downwardapi-volume-d653a8e1-7c6c-4f1a-a7ac-6ebfad33ddec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:32:27.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3463" for this suite.
Dec 28 13:32:33.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:32:34.170: INFO: namespace downward-api-3463 deletion completed in 6.21866065s

• [SLOW TEST:14.616 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
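What the test builds is a pod that exposes its own resource requests through a downwardAPI volume and prints the file, so the framework can grep the container log for the expected value. A minimal sketch of such a manifest, assuming busybox in place of the e2e mounttest image (all names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never        # lets the pod reach Succeeded, matching "success or failure" above
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory

Note the value is written in bytes (32Mi → 33554432) unless a divisor is set on the resourceFieldRef.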
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:32:34.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-67k8
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 13:32:34.315: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-67k8" in namespace "subpath-8906" to be "success or failure"
Dec 28 13:32:34.329: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.176272ms
Dec 28 13:32:36.337: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021753396s
Dec 28 13:32:38.354: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038825392s
Dec 28 13:32:40.363: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04769931s
Dec 28 13:32:42.376: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 8.060951851s
Dec 28 13:32:44.541: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 10.225133074s
Dec 28 13:32:46.558: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 12.242272677s
Dec 28 13:32:48.576: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 14.260212142s
Dec 28 13:32:50.600: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 16.284152688s
Dec 28 13:32:52.617: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 18.301550749s
Dec 28 13:32:54.638: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 20.322197032s
Dec 28 13:32:56.666: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 22.350071172s
Dec 28 13:32:58.677: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 24.361272346s
Dec 28 13:33:00.685: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 26.369621244s
Dec 28 13:33:02.699: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Running", Reason="", readiness=true. Elapsed: 28.383782813s
Dec 28 13:33:04.709: INFO: Pod "pod-subpath-test-configmap-67k8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.393945066s
STEP: Saw pod success
Dec 28 13:33:04.710: INFO: Pod "pod-subpath-test-configmap-67k8" satisfied condition "success or failure"
Dec 28 13:33:04.715: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-67k8 container test-container-subpath-configmap-67k8: 
STEP: delete the pod
Dec 28 13:33:05.052: INFO: Waiting for pod pod-subpath-test-configmap-67k8 to disappear
Dec 28 13:33:05.058: INFO: Pod pod-subpath-test-configmap-67k8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-67k8
Dec 28 13:33:05.058: INFO: Deleting pod "pod-subpath-test-configmap-67k8" in namespace "subpath-8906"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:33:05.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8906" for this suite.
Dec 28 13:33:11.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:33:11.174: INFO: namespace subpath-8906 deletion completed in 6.106374473s

• [SLOW TEST:37.004 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
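The "mountPath of existing file" case mounts a single configMap key over a path that already exists in the container image, which requires subPath. A minimal sketch under those assumptions (image, key, and names are illustrative, not the test's own):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/hostname"]
    volumeMounts:
    - name: config
      mountPath: /etc/hostname    # a file that already exists in the image
      subPath: hostname           # single key projected over that file
  volumes:
  - name: config
    configMap:
      name: my-configmap          # must define a key named "hostname"

One caveat the "Atomic writer" grouping hints at: configMap volumes are updated atomically via a symlink swap, but a subPath mount binds to the file as resolved at pod start, so later configMap updates never show up through the subPath.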
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:33:11.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:33:11.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 28 13:33:11.379: INFO: stderr: ""
Dec 28 13:33:11.379: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:33:11.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2936" for this suite.
Dec 28 13:33:17.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:33:17.535: INFO: namespace kubectl-2936 deletion completed in 6.147357567s

• [SLOW TEST:6.361 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
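The assertion is just that both version.Info structs (client and server) show up in the output. The same data is available in machine-readable form, which is usually easier to validate than parsing the Go struct dump above:

kubectl version             # human-readable struct dump, as captured above
kubectl version -o json     # same client/server version.Info as JSON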
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:33:17.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1228 13:33:58.251896       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 13:33:58.251: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:33:58.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2456" for this suite.
Dec 28 13:34:11.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:34:12.770: INFO: namespace gc-2456 deletion completed in 14.51182727s

• [SLOW TEST:55.235 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
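Orphaning is requested through the delete options, not through anything on the RC itself: with propagation policy Orphan, the garbage collector strips the pods' ownerReferences instead of deleting them, and the 30-second wait above is the test confirming the collector then leaves the pods alone. A v1.15-era sketch of both ways to ask for this (resource name illustrative):

kubectl delete rc my-rc --cascade=false    # orphans the RC's pods

# equivalent API-level DeleteOptions body:
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}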
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:34:12.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:34:14.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b" in namespace "downward-api-8744" to be "success or failure"
Dec 28 13:34:14.721: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 64.341008ms
Dec 28 13:34:17.809: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.152104975s
Dec 28 13:34:19.823: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.166808808s
Dec 28 13:34:21.832: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.175271479s
Dec 28 13:34:23.840: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183130312s
Dec 28 13:34:25.849: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.191975306s
Dec 28 13:34:27.861: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.204010333s
STEP: Saw pod success
Dec 28 13:34:27.861: INFO: Pod "downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b" satisfied condition "success or failure"
Dec 28 13:34:27.867: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b container client-container: 
STEP: delete the pod
Dec 28 13:34:27.987: INFO: Waiting for pod downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b to disappear
Dec 28 13:34:27.998: INFO: Pod downwardapi-volume-2496376a-2e08-4e86-9139-2ef10a37d73b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:34:27.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8744" for this suite.
Dec 28 13:34:34.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:34:34.202: INFO: namespace downward-api-8744 deletion completed in 6.183381441s

• [SLOW TEST:21.432 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
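This is the same downwardAPI volume pattern as the memory-request test above; only the items entry changes. A sketch of the fragment, using a divisor so the limit comes out in millicores (values illustrative):

      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m         # a 500m CPU limit is written as "500"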
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:34:34.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4afdcffc-1150-413c-866c-aa1ff78ed4c0
STEP: Creating a pod to test consume configMaps
Dec 28 13:34:34.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b" in namespace "configmap-3808" to be "success or failure"
Dec 28 13:34:34.366: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102875ms
Dec 28 13:34:36.394: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039966862s
Dec 28 13:34:38.403: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048942644s
Dec 28 13:34:40.414: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059684617s
Dec 28 13:34:42.431: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076110959s
STEP: Saw pod success
Dec 28 13:34:42.431: INFO: Pod "pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b" satisfied condition "success or failure"
Dec 28 13:34:42.435: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b container configmap-volume-test: 
STEP: delete the pod
Dec 28 13:34:42.637: INFO: Waiting for pod pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b to disappear
Dec 28 13:34:42.643: INFO: Pod pod-configmaps-f3c98159-0273-4f0e-bf87-d33bb73ec67b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:34:42.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3808" for this suite.
Dec 28 13:34:48.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:34:48.786: INFO: namespace configmap-3808 deletion completed in 6.138322587s

• [SLOW TEST:14.583 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
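"As non-root" here means the pod-level securityContext pins a non-zero UID and the test verifies the mounted key is still readable (configMap files default to mode 0644). A minimal sketch under that assumption (UID and names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # non-root
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # must define a key named "data-1"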
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:34:48.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-97be1913-0040-4d99-b5d4-fb12abcfebce
STEP: Creating a pod to test consume configMaps
Dec 28 13:34:48.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12" in namespace "projected-9432" to be "success or failure"
Dec 28 13:34:48.922: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12": Phase="Pending", Reason="", readiness=false. Elapsed: 16.998777ms
Dec 28 13:34:51.809: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904118146s
Dec 28 13:34:53.818: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.913356351s
Dec 28 13:34:55.830: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.925140677s
Dec 28 13:34:57.838: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.933256062s
STEP: Saw pod success
Dec 28 13:34:57.838: INFO: Pod "pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12" satisfied condition "success or failure"
Dec 28 13:34:57.842: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 13:34:57.900: INFO: Waiting for pod pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12 to disappear
Dec 28 13:34:57.950: INFO: Pod pod-projected-configmaps-72f13559-61fa-4e2d-9323-1b810b67fe12 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:34:57.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9432" for this suite.
Dec 28 13:35:03.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:35:04.146: INFO: namespace projected-9432 deletion completed in 6.181920207s

• [SLOW TEST:15.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
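The projected variant differs from the plain configMap sketch above only in the volume source; projected volumes exist so that configMaps, secrets, and downwardAPI items can share a single mount point. The fragment that changes:

  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume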
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:35:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7901
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 13:35:04.280: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 13:35:44.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-7901 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 13:35:44.497: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 13:35:44.923: INFO: Waiting for endpoints: map[]
Dec 28 13:35:44.929: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-7901 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 13:35:44.929: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 13:35:45.227: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:35:45.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7901" for this suite.
Dec 28 13:36:11.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:36:11.446: INFO: namespace pod-network-test-7901 deletion completed in 26.208334439s

• [SLOW TEST:67.299 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:36:11.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 28 13:36:11.496: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 28 13:36:11.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:12.101: INFO: stderr: ""
Dec 28 13:36:12.101: INFO: stdout: "service/redis-slave created\n"
Dec 28 13:36:12.101: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 28 13:36:12.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:13.054: INFO: stderr: ""
Dec 28 13:36:13.054: INFO: stdout: "service/redis-master created\n"
Dec 28 13:36:13.055: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 28 13:36:13.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:13.668: INFO: stderr: ""
Dec 28 13:36:13.668: INFO: stdout: "service/frontend created\n"
Dec 28 13:36:13.670: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 28 13:36:13.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:14.073: INFO: stderr: ""
Dec 28 13:36:14.073: INFO: stdout: "deployment.apps/frontend created\n"
Dec 28 13:36:14.074: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 28 13:36:14.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:14.781: INFO: stderr: ""
Dec 28 13:36:14.781: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 28 13:36:14.782: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 28 13:36:14.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363'
Dec 28 13:36:16.292: INFO: stderr: ""
Dec 28 13:36:16.292: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 28 13:36:16.292: INFO: Waiting for all frontend pods to be Running.
Dec 28 13:36:41.348: INFO: Waiting for frontend to serve content.
Dec 28 13:36:41.419: INFO: Trying to add a new entry to the guestbook.
Dec 28 13:36:41.703: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 28 13:36:41.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.042: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 13:36:42.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.306: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.306: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 13:36:42.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.530: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 13:36:42.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.664: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 13:36:42.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.798: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 13:36:42.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7363'
Dec 28 13:36:42.914: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 13:36:42.914: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:36:42.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7363" for this suite.
Dec 28 13:37:31.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:37:31.216: INFO: namespace kubectl-7363 deletion completed in 48.297023418s

• [SLOW TEST:79.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:37:31.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:37:59.500: INFO: Container started at 2019-12-28 13:37:38 +0000 UTC, pod became ready at 2019-12-28 13:37:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:37:59.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9662" for this suite.
Dec 28 13:38:21.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:38:21.917: INFO: namespace container-probe-9662 deletion completed in 22.358665237s

• [SLOW TEST:50.700 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
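The ~19s gap between "Container started" and "pod became ready" is the initial delay doing its job: the kubelet runs no readiness check until the delay expires, and since readiness failures (unlike liveness failures) never restart a container, the restart count stays at zero. A sketch of the shape of such a probe, with illustrative image and timings rather than the test's exact values:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-example
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20
      periodSeconds: 5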
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:38:21.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 28 13:38:22.044: INFO: Waiting up to 5m0s for pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017" in namespace "var-expansion-6225" to be "success or failure"
Dec 28 13:38:22.053: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.048082ms
Dec 28 13:38:24.067: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023415197s
Dec 28 13:38:26.079: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035105282s
Dec 28 13:38:28.091: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047445049s
Dec 28 13:38:30.097: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053439934s
STEP: Saw pod success
Dec 28 13:38:30.097: INFO: Pod "var-expansion-4bea7b54-8979-4f79-b869-970e060aa017" satisfied condition "success or failure"
Dec 28 13:38:30.101: INFO: Trying to get logs from node iruya-node pod var-expansion-4bea7b54-8979-4f79-b869-970e060aa017 container dapi-container: 
STEP: delete the pod
Dec 28 13:38:30.499: INFO: Waiting for pod var-expansion-4bea7b54-8979-4f79-b869-970e060aa017 to disappear
Dec 28 13:38:30.504: INFO: Pod var-expansion-4bea7b54-8979-4f79-b869-970e060aa017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:38:30.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6225" for this suite.
Dec 28 13:38:36.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:38:36.687: INFO: namespace var-expansion-6225 deletion completed in 6.177124407s

• [SLOW TEST:14.770 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
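Composition works because the kubelet expands $(VAR) references in an env value against variables defined earlier in the same list, before the container starts. A minimal sketch (names and values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # expanded by the kubelet to "foo-value;;bar-value"

Ordering matters: a $(VAR) reference to a variable defined later in the list is left as the literal string.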
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:38:36.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 28 13:38:36.772: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 13:38:36.781: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 13:38:36.783: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 28 13:38:36.795: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.795: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 13:38:36.795: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 28 13:38:36.795: INFO: 	Container weave ready: true, restart count 0
Dec 28 13:38:36.795: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 13:38:36.795: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 28 13:38:36.805: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 28 13:38:36.805: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 28 13:38:36.805: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container coredns ready: true, restart count 0
Dec 28 13:38:36.805: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container etcd ready: true, restart count 0
Dec 28 13:38:36.805: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container weave ready: true, restart count 0
Dec 28 13:38:36.805: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 13:38:36.805: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container coredns ready: true, restart count 0
Dec 28 13:38:36.805: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 28 13:38:36.805: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 28 13:38:36.805: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 28 13:38:36.912: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 28 13:38:36.912: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16.15e48c8f629b0192], Reason = [Scheduled], Message = [Successfully assigned sched-pred-487/filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16.15e48c9082fe730a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16.15e48c913a6e1061], Reason = [Created], Message = [Created container filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16.15e48c9165f2d552], Reason = [Started], Message = [Started container filler-pod-aa2fb329-d2dc-4671-9ffc-99d2e78dce16]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641.15e48c8f63e48849], Reason = [Scheduled], Message = [Successfully assigned sched-pred-487/filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641.15e48c906dde94c9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641.15e48c916b151307], Reason = [Created], Message = [Created container filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641.15e48c918a04d155], Reason = [Started], Message = [Started container filler-pod-de5fbac0-4762-4232-9bf8-01b9aa1b8641]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e48c91b91453db], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:38:48.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-487" for this suite.
Dec 28 13:38:56.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:38:56.255: INFO: namespace sched-pred-487 deletion completed in 8.152623274s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.567 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
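The FailedScheduling event comes from a final pod whose CPU request exceeds what the filler pods left free; scheduling is decided on requests, not actual usage, so pause containers are enough to fill the nodes. A sketch of that last pod (the request is illustrative; the test computes it from node allocatable):

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"      # more CPU than remains free on either node
      limits:
        cpu: "1"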
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:38:56.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:38:57.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e" in namespace "projected-7963" to be "success or failure"
Dec 28 13:38:57.831: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.351215ms
Dec 28 13:38:59.869: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062805687s
Dec 28 13:39:01.911: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105145981s
Dec 28 13:39:03.943: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136757737s
Dec 28 13:39:05.953: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146937488s
STEP: Saw pod success
Dec 28 13:39:05.953: INFO: Pod "downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e" satisfied condition "success or failure"
Dec 28 13:39:05.956: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e container client-container: 
STEP: delete the pod
Dec 28 13:39:06.059: INFO: Waiting for pod downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e to disappear
Dec 28 13:39:06.064: INFO: Pod downwardapi-volume-3d62b1a9-a337-4d25-9745-405d31810b2e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:39:06.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7963" for this suite.
Dec 28 13:39:12.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:39:12.286: INFO: namespace projected-7963 deletion completed in 6.211104704s

• [SLOW TEST:16.031 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:39:12.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 13:39:12.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1075'
Dec 28 13:39:12.622: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 13:39:12.623: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 28 13:39:14.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1075'
Dec 28 13:39:14.916: INFO: stderr: ""
Dec 28 13:39:14.917: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:39:14.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1075" for this suite.
Dec 28 13:39:20.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:39:21.063: INFO: namespace kubectl-1075 deletion completed in 6.135789252s

• [SLOW TEST:8.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
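The generator warning above is the point of interest: --generator=deployment/apps.v1 was deprecated in this release. Equivalent invocations, reusing the image from this spec (namespace flag omitted; any namespace works):

# deprecated form, as run by the test:
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1
# replacement suggested by the warning:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine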
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:39:21.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 28 13:39:21.213: INFO: Waiting up to 5m0s for pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1" in namespace "emptydir-1143" to be "success or failure"
Dec 28 13:39:21.217: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25454ms
Dec 28 13:39:23.228: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015626154s
Dec 28 13:39:25.238: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02535276s
Dec 28 13:39:27.248: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035335039s
Dec 28 13:39:29.258: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04506378s
STEP: Saw pod success
Dec 28 13:39:29.258: INFO: Pod "pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1" satisfied condition "success or failure"
Dec 28 13:39:29.262: INFO: Trying to get logs from node iruya-node pod pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1 container test-container: 
STEP: delete the pod
Dec 28 13:39:29.332: INFO: Waiting for pod pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1 to disappear
Dec 28 13:39:29.401: INFO: Pod pod-a0bb2048-61d7-4f2c-a1c4-37b4f4a3daa1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:39:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1143" for this suite.
Dec 28 13:39:35.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:39:35.587: INFO: namespace emptydir-1143 deletion completed in 6.175168794s

• [SLOW TEST:14.523 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
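The (non-root,0666,default) triple decodes as: run as a non-root UID, expect files with mode 0666, on the default emptyDir medium (node disk). The real test uses a dedicated mounttest image; a rough busybox-based sketch of the same shape (UID and paths assumed):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: backed by node disk
EOF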
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:39:35.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 28 13:39:35.754: INFO: Waiting up to 5m0s for pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e" in namespace "containers-8477" to be "success or failure"
Dec 28 13:39:35.769: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.740465ms
Dec 28 13:39:37.788: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034436529s
Dec 28 13:39:39.827: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072840572s
Dec 28 13:39:41.841: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086726939s
Dec 28 13:39:43.866: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111667073s
STEP: Saw pod success
Dec 28 13:39:43.866: INFO: Pod "client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e" satisfied condition "success or failure"
Dec 28 13:39:43.882: INFO: Trying to get logs from node iruya-node pod client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e container test-container: 
STEP: delete the pod
Dec 28 13:39:43.985: INFO: Waiting for pod client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e to disappear
Dec 28 13:39:44.005: INFO: Pod client-containers-deb64e01-79c4-4703-8ccc-743eb6c7f26e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:39:44.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8477" for this suite.
Dec 28 13:39:50.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:39:50.315: INFO: namespace containers-8477 deletion completed in 6.303764134s

• [SLOW TEST:14.727 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
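What "override all" means here: setting both command (which replaces the image's ENTRYPOINT) and args (which replaces the image's CMD) on the container. A minimal sketch (pod name and output are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                    # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"]    # replaces the image CMD
EOF
kubectl logs command-override-demo   # prints: overridden arguments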
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:39:50.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 28 13:39:50.510: INFO: Waiting up to 5m0s for pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550" in namespace "emptydir-4214" to be "success or failure"
Dec 28 13:39:50.655: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550": Phase="Pending", Reason="", readiness=false. Elapsed: 144.703998ms
Dec 28 13:39:52.673: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162616366s
Dec 28 13:39:54.680: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16971799s
Dec 28 13:39:56.694: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183940307s
Dec 28 13:39:58.708: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.197131396s
STEP: Saw pod success
Dec 28 13:39:58.708: INFO: Pod "pod-dc8a9d54-1a68-441a-a8d2-8001122c0550" satisfied condition "success or failure"
Dec 28 13:39:58.716: INFO: Trying to get logs from node iruya-node pod pod-dc8a9d54-1a68-441a-a8d2-8001122c0550 container test-container: 
STEP: delete the pod
Dec 28 13:39:58.769: INFO: Waiting for pod pod-dc8a9d54-1a68-441a-a8d2-8001122c0550 to disappear
Dec 28 13:39:58.871: INFO: Pod pod-dc8a9d54-1a68-441a-a8d2-8001122c0550 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:39:58.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4214" for this suite.
Dec 28 13:40:04.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:40:05.048: INFO: namespace emptydir-4214 deletion completed in 6.170251649s

• [SLOW TEST:14.733 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
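This variant differs from the default-medium case above only in the volume medium: medium: Memory backs the emptyDir with tmpfs. A sketch (again with busybox standing in for the test's mounttest image):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]   # should report a tmpfs mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # the "tmpfs" part of the spec name
EOF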
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:40:05.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a1765bc3-7d31-4e48-bb9b-bd425e731e8e
STEP: Creating a pod to test consume secrets
Dec 28 13:40:05.247: INFO: Waiting up to 5m0s for pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca" in namespace "secrets-8239" to be "success or failure"
Dec 28 13:40:05.253: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Pending", Reason="", readiness=false. Elapsed: 5.760045ms
Dec 28 13:40:07.262: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014325208s
Dec 28 13:40:09.592: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344915216s
Dec 28 13:40:11.602: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354366444s
Dec 28 13:40:13.613: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365115791s
Dec 28 13:40:15.621: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.374059012s
STEP: Saw pod success
Dec 28 13:40:15.622: INFO: Pod "pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca" satisfied condition "success or failure"
Dec 28 13:40:15.626: INFO: Trying to get logs from node iruya-node pod pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca container secret-volume-test: 
STEP: delete the pod
Dec 28 13:40:15.696: INFO: Waiting for pod pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca to disappear
Dec 28 13:40:15.700: INFO: Pod pod-secrets-1fdda621-ec5f-498f-93f9-d1eab68b75ca no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:40:15.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8239" for this suite.
Dec 28 13:40:21.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:40:21.902: INFO: namespace secrets-8239 deletion completed in 6.195651161s

• [SLOW TEST:16.854 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
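"Multiple volumes" here means the same Secret mounted at two paths in one pod. A sketch with a hypothetical secret name and key:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF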
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:40:21.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:40:22.079: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 28 13:40:22.144: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 13:40:30.184: INFO: Creating deployment "test-rolling-update-deployment"
Dec 28 13:40:30.193: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 28 13:40:30.229: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 28 13:40:32.245: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 28 13:40:32.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 13:40:34.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 13:40:36.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 13:40:38.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137238, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713137230, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 13:40:40.260: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 28 13:40:40.271: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3859,SelfLink:/apis/apps/v1/namespaces/deployment-3859/deployments/test-rolling-update-deployment,UID:36bd0cd0-8c75-4331-bd0e-53d8d52a0740,ResourceVersion:18397280,Generation:1,CreationTimestamp:2019-12-28 13:40:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-28 13:40:30 +0000 UTC 2019-12-28 13:40:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-28 13:40:38 +0000 UTC 2019-12-28 13:40:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 13:40:40.274: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3859,SelfLink:/apis/apps/v1/namespaces/deployment-3859/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:7982738b-4494-43ec-88a4-afea642d9fe8,ResourceVersion:18397269,Generation:1,CreationTimestamp:2019-12-28 13:40:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 36bd0cd0-8c75-4331-bd0e-53d8d52a0740 0xc0031e9a17 0xc0031e9a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 28 13:40:40.274: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 28 13:40:40.275: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3859,SelfLink:/apis/apps/v1/namespaces/deployment-3859/replicasets/test-rolling-update-controller,UID:63fa2d00-1f29-4844-a23e-3e0f7fb74e91,ResourceVersion:18397278,Generation:2,CreationTimestamp:2019-12-28 13:40:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 36bd0cd0-8c75-4331-bd0e-53d8d52a0740 0xc0031e9947 0xc0031e9948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 13:40:40.277: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-lw642" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-lw642,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3859,SelfLink:/api/v1/namespaces/deployment-3859/pods/test-rolling-update-deployment-79f6b9d75c-lw642,UID:d3eeccc6-cca3-4833-b1d2-769db9f952b4,ResourceVersion:18397268,Generation:0,CreationTimestamp:2019-12-28 13:40:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 7982738b-4494-43ec-88a4-afea642d9fe8 0xc002c806e7 0xc002c806e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqg68 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqg68,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-zqg68 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c80760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c80780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:40:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:40:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:40:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:40:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-28 13:40:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-28 13:40:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d467eabdef24575edbe5230907e314fc38b8fdda871d9d8adfad77dd357ba6cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:40:40.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3859" for this suite.
Dec 28 13:40:46.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:40:47.116: INFO: namespace deployment-3859 deletion completed in 6.835102228s

• [SLOW TEST:25.214 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
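The strategy under test is the one visible in the dump above: RollingUpdate with maxUnavailable and maxSurge of 25%. A comparable Deployment, reusing the labels and image from this spec (metadata names are illustrative):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl rollout status deployment/rolling-update-demo   # waits while new pods replace old ones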
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:40:47.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 28 13:40:57.801: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2603 pod-service-account-c6a4982b-5174-4113-9684-a60a8d7ecc3e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 28 13:41:00.409: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2603 pod-service-account-c6a4982b-5174-4113-9684-a60a8d7ecc3e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 28 13:41:00.813: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2603 pod-service-account-c6a4982b-5174-4113-9684-a60a8d7ecc3e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:41:01.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2603" for this suite.
Dec 28 13:41:07.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:41:07.680: INFO: namespace svcaccounts-2603 deletion completed in 6.369101009s

• [SLOW TEST:20.564 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
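The three exec calls above read the three files the kubelet projects into every pod from the service account's token secret. To inspect them by hand (pod name assumed):

kubectl run sa-demo --image=busybox --restart=Never -- sleep 3600
kubectl exec sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace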
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:41:07.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 28 13:41:07.823: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 28 13:41:12.846: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:41:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8618" for this suite.
Dec 28 13:41:19.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:41:20.131: INFO: namespace replication-controller-8618 deletion completed in 6.201930092s

• [SLOW TEST:12.451 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
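"Released" means orphaned: once a pod's labels stop matching the controller's selector, the ReplicationController drops it and creates a replacement, while the relabeled pod keeps running unowned. A sketch (names assumed):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite   # RC releases this pod and starts a new one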
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:41:20.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 28 13:41:32.909: INFO: Successfully updated pod "annotationupdatec906fc8a-ffed-464b-a0c2-4f95325719f3"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:41:35.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3026" for this suite.
Dec 28 13:41:57.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:41:57.141: INFO: namespace downward-api-3026 deletion completed in 22.116662815s

• [SLOW TEST:37.009 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
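The update path being tested: a downwardAPI volume item backed by metadata.annotations is refreshed by the kubelet when the annotations change, with no container restart. A sketch (annotation key and values assumed):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl logs annotationupdate-demo   # eventually shows builder="bob" without a restart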
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:41:57.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:42:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3376" for this suite.
Dec 28 13:43:07.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:43:07.511: INFO: namespace kubelet-test-3376 deletion completed in 1m2.153474327s

• [SLOW TEST:70.369 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
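hostAliases entries are what the kubelet appends to the pod's /etc/hosts. A minimal reproduction (names and IP illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # shows a "127.0.0.1 foo.local bar.local" entry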
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:43:07.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 28 13:43:07.596: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:43:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-652" for this suite.
Dec 28 13:43:27.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:43:28.002: INFO: namespace init-container-652 deletion completed in 6.715153394s

• [SLOW TEST:20.491 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
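With restartPolicy: Never, a single init container failure is terminal: the pod goes straight to phase Failed and the app containers never start. A sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["sh", "-c", "exit 1"]   # fails immediately
  containers:
  - name: app
    image: busybox
    command: ["echo", "never runs"]
EOF
kubectl get pod init-fail-demo   # status settles at Init:Error; phase is Failed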
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:43:28.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 28 13:43:28.275: INFO: Waiting up to 5m0s for pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2" in namespace "downward-api-7478" to be "success or failure"
Dec 28 13:43:28.310: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.747455ms
Dec 28 13:43:30.321: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046174089s
Dec 28 13:43:32.333: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057490359s
Dec 28 13:43:34.346: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070978931s
Dec 28 13:43:36.356: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080637983s
STEP: Saw pod success
Dec 28 13:43:36.356: INFO: Pod "downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2" satisfied condition "success or failure"
Dec 28 13:43:36.363: INFO: Trying to get logs from node iruya-node pod downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2 container dapi-container: 
STEP: delete the pod
Dec 28 13:43:36.476: INFO: Waiting for pod downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2 to disappear
Dec 28 13:43:36.488: INFO: Pod downward-api-bf4a041f-5a27-4d93-b2a7-bbfb292c5ca2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:43:36.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7478" for this suite.
Dec 28 13:43:42.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:43:42.626: INFO: namespace downward-api-7478 deletion completed in 6.131204136s

• [SLOW TEST:14.624 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
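The env-var flavor of the downward API uses resourceFieldRef directly on the container's env entries. A sketch with assumed resource values:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_REQUEST
      valueFrom: {resourceFieldRef: {resource: requests.cpu}}
    - name: CPU_LIMIT
      valueFrom: {resourceFieldRef: {resource: limits.cpu}}
    - name: MEMORY_REQUEST
      valueFrom: {resourceFieldRef: {resource: requests.memory}}
    - name: MEMORY_LIMIT
      valueFrom: {resourceFieldRef: {resource: limits.memory}}
EOF
kubectl logs dapi-env-demo   # prints the four values resolved from the container's resources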
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:43:42.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8653
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-8653
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8653
Dec 28 13:43:42.882: INFO: Found 0 stateful pods, waiting for 1
Dec 28 13:43:52.892: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
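"Burst" scaling corresponds to podManagementPolicy: Parallel, so scale-up and scale-down do not wait on ordinal order. A sketch of a comparable StatefulSet (image and probe details assumed; the framework's own template differs):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # the headless service created above
  podManagementPolicy: Parallel  # the "burst" part
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF

Moving /usr/share/nginx/html/index.html aside, as the exec steps below do, makes this probe fail, so the pod keeps Running but reports Ready=false.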
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 28 13:43:52.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:43:53.557: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 13:43:53.557: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:43:53.557: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 13:43:53.571: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 28 13:44:03.580: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 13:44:03.580: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 13:44:03.695: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 28 13:44:03.695: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:03.696: INFO: 
Dec 28 13:44:03.696: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 28 13:44:05.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.905034252s
Dec 28 13:44:06.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.440089726s
Dec 28 13:44:07.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.373988369s
Dec 28 13:44:08.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.358834849s
Dec 28 13:44:10.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.338907769s
Dec 28 13:44:12.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.978557721s
Dec 28 13:44:13.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 401.758571ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8653
Dec 28 13:44:14.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:44:14.866: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 13:44:14.866: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 13:44:14.867: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 13:44:14.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:44:15.428: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 28 13:44:15.429: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 13:44:15.429: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 13:44:15.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:44:15.977: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 28 13:44:15.977: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 13:44:15.977: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 13:44:15.985: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:44:15.985: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:44:15.985: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 28 13:44:15.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:44:16.602: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 13:44:16.602: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:44:16.602: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 13:44:16.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:44:16.928: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 13:44:16.928: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:44:16.928: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 13:44:16.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:44:17.435: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 13:44:17.435: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:44:17.435: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 13:44:17.435: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 13:44:17.446: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 28 13:44:27.466: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 13:44:27.466: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 13:44:27.466: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 13:44:27.490: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 28 13:44:27.490: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:27.491: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:27.491: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:27.491: INFO: 
Dec 28 13:44:27.491: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 13:44:28.914: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 28 13:44:28.915: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:28.915: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:28.915: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:28.915: INFO: 
Dec 28 13:44:28.915: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 13:44:29.925 - Dec 28 13:44:33.454: INFO: [four near-identical status dumps condensed: ss-0, ss-1 and ss-2 still Running with Ready=false and GRACE 30s; StatefulSet ss has not reached scale 0, at 3]
Dec 28 13:44:34.473: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 28 13:44:34.473: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:34.473: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:34.473: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:34.473: INFO: 
Dec 28 13:44:34.473: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 13:44:35.490: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 28 13:44:35.490: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:35.490: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:35.490: INFO: 
Dec 28 13:44:35.490: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 28 13:44:36.508: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 28 13:44:36.509: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:43:42 +0000 UTC  }]
Dec 28 13:44:36.509: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:44:03 +0000 UTC  }]
Dec 28 13:44:36.509: INFO: 
Dec 28 13:44:36.509: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8653
Dec 28 13:44:37.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:44:37.809: INFO: rc: 1
Dec 28 13:44:37.809: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002326270 exit status 1   true [0xc0028c0ad8 0xc0028c0af0 0xc0028c0b08] [0xc0028c0ad8 0xc0028c0af0 0xc0028c0b08] [0xc0028c0ae8 0xc0028c0b00] [0xba6c50 0xba6c50] 0xc00144bce0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 28 13:44:47.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:44:47.972: INFO: rc: 1
Dec 28 13:44:47.972: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fbf680 exit status 1   true [0xc001292540 0xc001292558 0xc001292570] [0xc001292540 0xc001292558 0xc001292570] [0xc001292550 0xc001292568] [0xba6c50 0xba6c50] 0xc0028766c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 28 13:44:57.973 - Dec 28 13:49:32.664: INFO: [28 near-identical retries condensed: every 10s the same kubectl exec of 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' against ss-0 is re-run, returns rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found', and the framework logs 'Waiting 10s to retry failed RunHostCmd']
Dec 28 13:49:42.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8653 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:49:42.835: INFO: rc: 1
Dec 28 13:49:42.836: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 28 13:49:42.836: INFO: Scaling statefulset ss to 0
Dec 28 13:49:42.868: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 28 13:49:42.872: INFO: Deleting all statefulset in ns statefulset-8653
Dec 28 13:49:42.876: INFO: Scaling statefulset ss to 0
Dec 28 13:49:42.892: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 13:49:42.895: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:49:42.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8653" for this suite.
Dec 28 13:49:49.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:49:49.152: INFO: namespace statefulset-8653 deletion completed in 6.210696683s

• [SLOW TEST:366.525 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
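
(For reference: "burst scaling" is StatefulSet behavior under podManagementPolicy: Parallel, where the controller adds and removes pods without waiting for ordinal neighbors to be Running and Ready, which is why the scale-down above proceeds while every pod is Ready=false. A minimal sketch of such a StatefulSet; names, labels and image are illustrative:)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # a matching headless Service is assumed to exist
  replicas: 3
  podManagementPolicy: Parallel      # burst mode: no ordered, one-at-a-time scaling
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# Scaling down works even while pods are unhealthy:
#   kubectl scale statefulset ss --replicas=0 --namespace=statefulset-8653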
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:49:49.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 28 13:49:49.350: INFO: Waiting up to 5m0s for pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0" in namespace "emptydir-2971" to be "success or failure"
Dec 28 13:49:49.370: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.488018ms
Dec 28 13:49:51.379: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02919075s
Dec 28 13:49:53.394: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044289653s
Dec 28 13:49:55.403: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052732253s
Dec 28 13:49:57.411: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060723762s
Dec 28 13:49:59.423: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073413363s
STEP: Saw pod success
Dec 28 13:49:59.424: INFO: Pod "pod-14dd4bb3-7897-4013-9525-792532d2a0e0" satisfied condition "success or failure"
Dec 28 13:49:59.429: INFO: Trying to get logs from node iruya-node pod pod-14dd4bb3-7897-4013-9525-792532d2a0e0 container test-container: 
STEP: delete the pod
Dec 28 13:49:59.614: INFO: Waiting for pod pod-14dd4bb3-7897-4013-9525-792532d2a0e0 to disappear
Dec 28 13:49:59.626: INFO: Pod pod-14dd4bb3-7897-4013-9525-792532d2a0e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:49:59.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2971" for this suite.
Dec 28 13:50:05.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:50:05.863: INFO: namespace emptydir-2971 deletion completed in 6.231755048s

• [SLOW TEST:16.711 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
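
(The (root,0666,default) case writes a file as root with mode 0666 onto an emptyDir backed by the node's default medium, then checks mode, ownership and content from inside the pod. The conformance test uses its own mounttest image; the busybox pod below is only a hedged equivalent:)

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c"]
    args:
    - |
      touch /mnt/volume/file
      chmod 0666 /mnt/volume/file
      ls -ln /mnt/volume/file        # expect -rw-rw-rw- owned by uid 0
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}                     # default medium (node disk), not Memory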
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:50:05.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:50:16.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7493" for this suite.
Dec 28 13:51:00.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:51:00.250: INFO: namespace kubelet-test-7493 deletion completed in 44.168752879s

• [SLOW TEST:54.386 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
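
(All this spec needs is a busybox pod that writes a line to stdout; the test then reads the container log back through the API and compares. A sketch with a made-up pod name and message:)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from the busybox pod"]
# Verify with:
#   kubectl logs busybox-logs-demo
# which should print exactly the echoed line once the container has run.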
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:51:00.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:51:00.483: INFO: Create a RollingUpdate DaemonSet
Dec 28 13:51:00.493: INFO: Check that daemon pods launch on every node of the cluster
Dec 28 13:51:00.504: INFO: Number of nodes with available pods: 0
Dec 28 13:51:00.504: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:01.522: INFO: Number of nodes with available pods: 0
Dec 28 13:51:01.522: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:02.714: INFO: Number of nodes with available pods: 0
Dec 28 13:51:02.714: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:03.530: INFO: Number of nodes with available pods: 0
Dec 28 13:51:03.531: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:04.525: INFO: Number of nodes with available pods: 0
Dec 28 13:51:04.525: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:05.520: INFO: Number of nodes with available pods: 0
Dec 28 13:51:05.520: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:07.574: INFO: Number of nodes with available pods: 0
Dec 28 13:51:07.574: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:09.410: INFO: Number of nodes with available pods: 0
Dec 28 13:51:09.410: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:09.866: INFO: Number of nodes with available pods: 0
Dec 28 13:51:09.867: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:10.525: INFO: Number of nodes with available pods: 0
Dec 28 13:51:10.526: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:11.519: INFO: Number of nodes with available pods: 1
Dec 28 13:51:11.519: INFO: Node iruya-node is running more than one daemon pod
Dec 28 13:51:12.533: INFO: Number of nodes with available pods: 2
Dec 28 13:51:12.533: INFO: Number of running nodes: 2, number of available pods: 2
Dec 28 13:51:12.533: INFO: Update the DaemonSet to trigger a rollout
Dec 28 13:51:12.553: INFO: Updating DaemonSet daemon-set
Dec 28 13:51:20.670: INFO: Roll back the DaemonSet before rollout is complete
Dec 28 13:51:20.685: INFO: Updating DaemonSet daemon-set
Dec 28 13:51:20.685: INFO: Make sure DaemonSet rollback is complete
Dec 28 13:51:20.704: INFO: Wrong image for pod: daemon-set-kdxkc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 28 13:51:20.704: INFO: Pod daemon-set-kdxkc is not available
Dec 28 13:51:22.510: INFO: Wrong image for pod: daemon-set-kdxkc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 28 13:51:22.510: INFO: Pod daemon-set-kdxkc is not available
Dec 28 13:51:23.213: INFO: Wrong image for pod: daemon-set-kdxkc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 28 13:51:23.213: INFO: Pod daemon-set-kdxkc is not available
Dec 28 13:51:24.194: INFO: Wrong image for pod: daemon-set-kdxkc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 28 13:51:24.194: INFO: Pod daemon-set-kdxkc is not available
Dec 28 13:51:25.198: INFO: Wrong image for pod: daemon-set-kdxkc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 28 13:51:25.198: INFO: Pod daemon-set-kdxkc is not available
Dec 28 13:51:26.406: INFO: Pod daemon-set-x7rd5 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9191, will wait for the garbage collector to delete the pods
Dec 28 13:51:26.873: INFO: Deleting DaemonSet.extensions daemon-set took: 25.486718ms
Dec 28 13:51:27.374: INFO: Terminating DaemonSet.extensions daemon-set pods took: 501.090852ms
Dec 28 13:51:35.013: INFO: Number of nodes with available pods: 0
Dec 28 13:51:35.013: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 13:51:35.016: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9191/daemonsets","resourceVersion":"18398632"},"items":null}

Dec 28 13:51:35.018: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9191/pods","resourceVersion":"18398632"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:51:35.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9191" for this suite.
Dec 28 13:51:41.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:51:41.185: INFO: namespace daemonsets-9191 deletion completed in 6.154714395s

• [SLOW TEST:40.934 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
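
(The sequence above, restated: a RollingUpdate DaemonSet comes up on both nodes, its image is bumped to the unpullable foo:non-existent, and the rollback is issued before the rollout completes; only the one pod already broken by the bad image (daemon-set-kdxkc) gets replaced, so healthy pods see no unnecessary restarts. An equivalent manifest plus commands; the e2e test drives this through the API rather than kubectl, and the container name is assumed:)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                    # hypothetical container name
        image: docker.io/library/nginx:1.14-alpine
# Trigger a bad rollout, then undo it before it finishes:
#   kubectl set image daemonset/daemon-set app=foo:non-existent
#   kubectl rollout undo daemonset/daemon-set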
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:51:41.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e7f71e5e-4769-4269-8acf-f418a9133436
STEP: Creating a pod to test consume secrets
Dec 28 13:51:41.450: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749" in namespace "projected-2889" to be "success or failure"
Dec 28 13:51:41.474: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Pending", Reason="", readiness=false. Elapsed: 24.071224ms
Dec 28 13:51:43.487: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037624068s
Dec 28 13:51:45.496: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045778694s
Dec 28 13:51:47.503: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053039882s
Dec 28 13:51:49.511: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061566959s
Dec 28 13:51:51.521: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071413463s
STEP: Saw pod success
Dec 28 13:51:51.521: INFO: Pod "pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749" satisfied condition "success or failure"
Dec 28 13:51:51.527: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 13:51:51.590: INFO: Waiting for pod pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749 to disappear
Dec 28 13:51:51.644: INFO: Pod pod-projected-secrets-658c53ec-9721-45de-b0ac-8333b72f7749 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:51:51.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2889" for this suite.
Dec 28 13:51:57.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:51:57.831: INFO: namespace projected-2889 deletion completed in 6.179256313s

• [SLOW TEST:16.644 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
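
("With mappings" means the projected volume's secret source uses items to remap a secret key onto a chosen file path instead of the default key-named file. A sketch with hypothetical names; the test's generated secret name appears a few lines up:)

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo        # hypothetical; the test generates its own name
data:
  data-1: dmFsdWUtMQ==               # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1              # key inside the Secret
            path: new-path-data-1    # remapped file name under the mount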
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:51:57.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 28 13:51:57.935: INFO: Waiting up to 5m0s for pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d" in namespace "downward-api-4065" to be "success or failure"
Dec 28 13:51:57.960: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.721144ms
Dec 28 13:51:59.973: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037652847s
Dec 28 13:52:01.982: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047163929s
Dec 28 13:52:03.999: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064433624s
Dec 28 13:52:06.010: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075147376s
STEP: Saw pod success
Dec 28 13:52:06.010: INFO: Pod "downward-api-aabe6054-8293-44e8-afc8-f168b072e40d" satisfied condition "success or failure"
Dec 28 13:52:06.015: INFO: Trying to get logs from node iruya-node pod downward-api-aabe6054-8293-44e8-afc8-f168b072e40d container dapi-container: 
STEP: delete the pod
Dec 28 13:52:06.105: INFO: Waiting for pod downward-api-aabe6054-8293-44e8-afc8-f168b072e40d to disappear
Dec 28 13:52:06.126: INFO: Pod downward-api-aabe6054-8293-44e8-afc8-f168b072e40d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:52:06.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4065" for this suite.
Dec 28 13:52:12.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:52:12.530: INFO: namespace downward-api-4065 deletion completed in 6.397098751s

• [SLOW TEST:14.699 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
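
(The pod UID reaches the container through the downward API's fieldRef. A minimal sketch; pod and variable names are made up:)

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # the pod's UID, injected as an env var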
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:52:12.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:52:12.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3664'
Dec 28 13:52:15.508: INFO: stderr: ""
Dec 28 13:52:15.508: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 28 13:52:15.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3664'
Dec 28 13:52:16.332: INFO: stderr: ""
Dec 28 13:52:16.332: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 28 13:52:17.342: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:17.342: INFO: Found 0 / 1
Dec 28 13:52:18.350: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:18.350: INFO: Found 0 / 1
Dec 28 13:52:19.343: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:19.343: INFO: Found 0 / 1
Dec 28 13:52:20.357: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:20.357: INFO: Found 0 / 1
Dec 28 13:52:21.372: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:21.373: INFO: Found 0 / 1
Dec 28 13:52:22.351: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:22.351: INFO: Found 0 / 1
Dec 28 13:52:23.341: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:23.341: INFO: Found 0 / 1
Dec 28 13:52:24.344: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:24.344: INFO: Found 1 / 1
Dec 28 13:52:24.344: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 13:52:24.351: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 13:52:24.351: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 13:52:24.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gc949 --namespace=kubectl-3664'
Dec 28 13:52:24.651: INFO: stderr: ""
Dec 28 13:52:24.651: INFO: stdout: "Name:           redis-master-gc949\nNamespace:      kubectl-3664\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 28 Dec 2019 13:52:15 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://83962342e3882c29608e53fb076f8d937e0f3b90e05049f410b5a2fadce99ea1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 28 Dec 2019 13:52:22 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6x8sl (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-6x8sl:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-6x8sl\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-3664/redis-master-gc949 to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 28 13:52:24.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3664'
Dec 28 13:52:24.784: INFO: stderr: ""
Dec 28 13:52:24.785: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-3664\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-gc949\n"
Dec 28 13:52:24.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3664'
Dec 28 13:52:24.931: INFO: stderr: ""
Dec 28 13:52:24.931: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-3664\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.111.106.92\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Dec 28 13:52:24.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 28 13:52:25.099: INFO: stderr: ""
Dec 28 13:52:25.099: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 28 Dec 2019 13:52:08 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 28 Dec 2019 13:52:08 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 28 Dec 2019 13:52:08 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 28 Dec 2019 13:52:08 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         146d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         77d\n  kubectl-3664               redis-master-gc949    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Dec 28 13:52:25.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3664'
Dec 28 13:52:25.209: INFO: stderr: ""
Dec 28 13:52:25.209: INFO: stdout: "Name:         kubectl-3664\nLabels:       e2e-framework=kubectl\n              e2e-run=fbeb1dac-0546-4439-981b-b1a7fb506aa6\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:52:25.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3664" for this suite.
Dec 28 13:52:47.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:52:47.368: INFO: namespace kubectl-3664 deletion completed in 22.153859035s

• [SLOW TEST:34.837 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
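The inspection sequence above can be replayed by hand; these mirror the commands in the log (the label-selector form of the pod describe is a convenience here, the test looks the pod up by name):

kubectl describe pods -l app=redis --namespace=kubectl-3664
kubectl describe rc redis-master --namespace=kubectl-3664
kubectl describe service redis-master --namespace=kubectl-3664
kubectl describe node iruya-node
kubectl describe namespace kubectl-3664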
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:52:47.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 28 13:52:47.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5359 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 28 13:52:57.783: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 28 13:52:57.783: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:52:59.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5359" for this suite.
Dec 28 13:53:05.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:53:06.048: INFO: namespace kubectl-5359 deletion completed in 6.231283077s

• [SLOW TEST:18.679 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
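As the deprecation warning in stderr notes, the job/v1 generator was on its way out in v1.15. Below, the test's invocation, followed by a rough non-deprecated sketch; kubectl create job does not support --rm/--attach/--stdin, so the second form only approximates the test's stdin round-trip:

# what the test ran: an attached, self-deleting job fed via stdin
echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job --namespace=kubectl-5359 \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# non-deprecated approximation: create the job explicitly, then delete it
kubectl create job e2e-test-rm-busybox-job --namespace=kubectl-5359 \
  --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
kubectl delete job e2e-test-rm-busybox-job --namespace=kubectl-5359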
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:53:06.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:53:06.112: INFO: Creating deployment "nginx-deployment"
Dec 28 13:53:06.171: INFO: Waiting for observed generation 1
Dec 28 13:53:09.058: INFO: Waiting for all required pods to come up
Dec 28 13:53:09.842: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 28 13:53:38.198: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 28 13:53:38.207: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 28 13:53:38.218: INFO: Updating deployment nginx-deployment
Dec 28 13:53:38.218: INFO: Waiting for observed generation 2
Dec 28 13:53:40.424: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 28 13:53:40.433: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 28 13:53:40.437: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 28 13:53:40.635: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 28 13:53:40.635: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 28 13:53:40.638: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 28 13:53:40.642: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 28 13:53:40.642: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 28 13:53:40.668: INFO: Updating deployment nginx-deployment
Dec 28 13:53:40.668: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 28 13:53:41.397: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 28 13:53:41.876: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
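The two figures just verified fall out of the deployment controller's proportional-scaling arithmetic; a sketch of the numbers (the controller's exact leftover-rounding is approximated here):

# cap  = new spec.replicas + maxSurge        = 30 + 3  = 33
# room = cap - current total (8 old + 5 new) = 33 - 13 = 20
# the room is split in proportion to the current ReplicaSet sizes (8:5):
#   old ReplicaSet: 8 + round(20 * 8/13) = 8 + 12 = 20
#   new ReplicaSet: 5 + round(20 * 5/13) = 5 + 8  = 13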
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 28 13:53:42.558: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1573,SelfLink:/apis/apps/v1/namespaces/deployment-1573/deployments/nginx-deployment,UID:bd0abc20-4207-417e-a25f-74ac7065c37f,ResourceVersion:18399118,Generation:3,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-28 13:53:40 +0000 UTC 2019-12-28 13:53:06 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-28 13:53:41 +0000 UTC 2019-12-28 13:53:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 28 13:53:42.811: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1573,SelfLink:/apis/apps/v1/namespaces/deployment-1573/replicasets/nginx-deployment-55fb7cb77f,UID:90a8f520-91fc-4afe-a1fa-51c9bbdd971b,ResourceVersion:18399163,Generation:3,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd0abc20-4207-417e-a25f-74ac7065c37f 0xc003273f27 0xc003273f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 13:53:42.812: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 28 13:53:42.812: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1573,SelfLink:/apis/apps/v1/namespaces/deployment-1573/replicasets/nginx-deployment-7b8c6f4498,UID:e3dcaac4-4240-46a7-a69a-4a849da70a17,ResourceVersion:18399162,Generation:3,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd0abc20-4207-417e-a25f-74ac7065c37f 0xc003273ff7 0xc003273ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 28 13:53:43.560: INFO: Pod "nginx-deployment-55fb7cb77f-27x22" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-27x22,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-27x22,UID:7869e32a-7f8e-4c0a-a7c8-85a8721f4252,ResourceVersion:18399100,Generation:0,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc001c3fe27 0xc001c3fe28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c3fea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c3fec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 13:53:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.560: INFO: Pod "nginx-deployment-55fb7cb77f-2ppr5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2ppr5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-2ppr5,UID:f50e3221-aad8-4e81-8068-09f78930ee16,ResourceVersion:18399082,Generation:0,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc001c3ff97 0xc001c3ff98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-28 13:53:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.560: INFO: Pod "nginx-deployment-55fb7cb77f-55gck" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-55gck,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-55gck,UID:cb60e6b3-3f9a-44ee-846a-126adde23c90,ResourceVersion:18399148,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552117 0xc002552118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025521b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.561: INFO: Pod "nginx-deployment-55fb7cb77f-97qbw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-97qbw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-97qbw,UID:abe4002e-6857-4ffe-be3b-babd3ea6fe09,ResourceVersion:18399083,Generation:0,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552237 0xc002552238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025522b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025522d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 13:53:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.561: INFO: Pod "nginx-deployment-55fb7cb77f-9vfkg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9vfkg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-9vfkg,UID:2899cef2-d88e-4198-868b-12a87167950b,ResourceVersion:18399134,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc0025523a7 0xc0025523a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.561: INFO: Pod "nginx-deployment-55fb7cb77f-ccx7q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ccx7q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-ccx7q,UID:9a3ef756-1817-45ab-8585-e4e48cf79a03,ResourceVersion:18399135,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc0025524c7 0xc0025524c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.561: INFO: Pod "nginx-deployment-55fb7cb77f-dhzzr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dhzzr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-dhzzr,UID:86670e0d-235c-414a-ac28-690b4e484c1c,ResourceVersion:18399145,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc0025525f7 0xc0025525f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.562: INFO: Pod "nginx-deployment-55fb7cb77f-fln2n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fln2n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-fln2n,UID:126b66f3-55a3-490f-a1d6-471bd5a68d21,ResourceVersion:18399160,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552727 0xc002552728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025527b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.562: INFO: Pod "nginx-deployment-55fb7cb77f-gwzft" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gwzft,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-gwzft,UID:82c32ea2-de71-4430-8e63-039d26528ea5,ResourceVersion:18399144,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552837 0xc002552838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025528a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025528c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.562: INFO: Pod "nginx-deployment-55fb7cb77f-msnzk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-msnzk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-msnzk,UID:519297a9-4940-40e3-87b8-ed1c999fdb07,ResourceVersion:18399097,Generation:0,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552947 0xc002552948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025529b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025529d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-28 13:53:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.563: INFO: Pod "nginx-deployment-55fb7cb77f-svzxv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-svzxv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-svzxv,UID:9088d1fd-eb03-4828-8da1-9b957495e760,ResourceVersion:18399103,Generation:0,CreationTimestamp:2019-12-28 13:53:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552ab7 0xc002552ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 13:53:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.563: INFO: Pod "nginx-deployment-55fb7cb77f-vb8x6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vb8x6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-vb8x6,UID:44304da2-c4bb-4715-83a9-084a30a2be94,ResourceVersion:18399170,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552c27 0xc002552c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-28 13:53:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.563: INFO: Pod "nginx-deployment-55fb7cb77f-vthh8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vthh8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-55fb7cb77f-vthh8,UID:a16192ac-591f-4eac-86c4-f498de26e1a1,ResourceVersion:18399139,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90a8f520-91fc-4afe-a1fa-51c9bbdd971b 0xc002552d87 0xc002552d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.563: INFO: Pod "nginx-deployment-7b8c6f4498-49f5v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-49f5v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-49f5v,UID:a557420c-ba29-48ab-8971-1259c81bd508,ResourceVersion:18399158,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002552ea7 0xc002552ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002552f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002552f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.564: INFO: Pod "nginx-deployment-7b8c6f4498-4grrt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4grrt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-4grrt,UID:e0b18d24-5f1d-4a55-9a12-55bcffeaf649,ResourceVersion:18399157,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002552fc7 0xc002552fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.564: INFO: Pod "nginx-deployment-7b8c6f4498-89tpv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-89tpv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-89tpv,UID:1e8af424-1113-4335-93d9-d2d71e9af73d,ResourceVersion:18399137,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0025530e7 0xc0025530e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.564: INFO: Pod "nginx-deployment-7b8c6f4498-8bk79" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8bk79,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-8bk79,UID:df1787cd-ae0e-4c6f-88f8-4cb3f1d627f3,ResourceVersion:18399039,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553207 0xc002553208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025532a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a5957101ddac189654dea102763022e8c5f441877cf6c173f6b6dfad409c5c6b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.564: INFO: Pod "nginx-deployment-7b8c6f4498-8r5gx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8r5gx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-8r5gx,UID:3aaf179a-368a-43f5-9b27-bb56f5d13dd7,ResourceVersion:18399120,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553377 0xc002553378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025533f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.565: INFO: Pod "nginx-deployment-7b8c6f4498-925tn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-925tn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-925tn,UID:92311a6a-3c6e-4c5d-80fb-19d6d1b9ec44,ResourceVersion:18399036,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553497 0xc002553498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2019-12-28 13:53:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://55c199432329e0ea907b4527c53bd8ea48cf61ba9b61f0ca19b41d14ca5101b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.565: INFO: Pod "nginx-deployment-7b8c6f4498-dvkjl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dvkjl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-dvkjl,UID:c10b1a55-b026-4e7b-8fd8-fe3fd31a0ed9,ResourceVersion:18399146,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0025535f7 0xc0025535f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.565: INFO: Pod "nginx-deployment-7b8c6f4498-fczcm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fczcm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-fczcm,UID:5774c7be-4070-48e2-b1a9-8020b3f6ef49,ResourceVersion:18399025,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553717 0xc002553718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025537a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025537c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-28 13:53:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://49c182fc6ccaa805c6b81276062110253e8646899fbabfc306add1f9909b1066}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.565: INFO: Pod "nginx-deployment-7b8c6f4498-gq7cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gq7cf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-gq7cf,UID:ffe832d7-4847-436f-b656-d78a5bea6008,ResourceVersion:18399159,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553897 0xc002553898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.565: INFO: Pod "nginx-deployment-7b8c6f4498-lr289" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lr289,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-lr289,UID:f4e986e4-30a5-4b02-9e63-18c218f3df6d,ResourceVersion:18399030,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0025539b7 0xc0025539b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553a20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4bcc36cdc1123cc86551236c14f2f43afd2aaa89f539e82520395f15d10b7dc4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.566: INFO: Pod "nginx-deployment-7b8c6f4498-m9jdd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m9jdd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-m9jdd,UID:cb1b5867-a2ac-4a6e-8531-cb2f276e3b99,ResourceVersion:18399033,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553b27 0xc002553b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://72a3db319e72424853d8eea3cfe3ed7cb99052d623fd2b8a6624e5f2302f7229}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.566: INFO: Pod "nginx-deployment-7b8c6f4498-mv6vd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mv6vd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-mv6vd,UID:ece28802-f923-425b-9482-4e94b7d5d9b7,ResourceVersion:18399142,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553c97 0xc002553c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.566: INFO: Pod "nginx-deployment-7b8c6f4498-nttl4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nttl4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-nttl4,UID:e51c3c44-889c-4cc7-baf7-7732179571f0,ResourceVersion:18399015,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553db7 0xc002553db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://78a98caa3350bd57c0ea649d41c0203948a7bd632b735709bbaf0498d7c24c17}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.566: INFO: Pod "nginx-deployment-7b8c6f4498-qglwl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qglwl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-qglwl,UID:a48bfafa-0ab3-475d-bf83-b667e51b72b8,ResourceVersion:18399044,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc002553f17 0xc002553f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002553fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002553fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5ea75c62c1a0390093424276e57742251a59a2cd8da0424dd06999582854d268}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.567: INFO: Pod "nginx-deployment-7b8c6f4498-r42mq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r42mq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-r42mq,UID:aae5e60d-6ed0-4a0a-bba2-851da0356600,ResourceVersion:18399172,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e50e7 0xc0008e50e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e5160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e5180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 13:53:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.567: INFO: Pod "nginx-deployment-7b8c6f4498-vpk22" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vpk22,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-vpk22,UID:4da764f2-f5a7-4d3a-bfbe-e7ca5efe1a20,ResourceVersion:18399005,Generation:0,CreationTimestamp:2019-12-28 13:53:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e5247 0xc0008e5248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e52b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e52d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-28 13:53:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 13:53:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c906b209baf251f54ed299225d532a33ee9187c7b517f72ad6a8cd474f371754}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.567: INFO: Pod "nginx-deployment-7b8c6f4498-wnssk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wnssk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-wnssk,UID:c7529f12-7ed9-4406-aa08-e00c461ce0e7,ResourceVersion:18399161,Generation:0,CreationTimestamp:2019-12-28 13:53:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e53a7 0xc0008e53a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e5420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e5440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 13:53:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.567: INFO: Pod "nginx-deployment-7b8c6f4498-xjvfn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xjvfn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-xjvfn,UID:c3d93846-c8bb-4cf3-8a5a-e8158f7639f2,ResourceVersion:18399153,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e5507 0xc0008e5508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e5570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e55a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.568: INFO: Pod "nginx-deployment-7b8c6f4498-zc5j8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zc5j8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-zc5j8,UID:e22032de-1b6e-4568-b0ef-e0f7f3a19755,ResourceVersion:18399141,Generation:0,CreationTimestamp:2019-12-28 13:53:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e5627 0xc0008e5628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e56a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e56c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 13:53:43.568: INFO: Pod "nginx-deployment-7b8c6f4498-zzz8s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zzz8s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1573,SelfLink:/api/v1/namespaces/deployment-1573/pods/nginx-deployment-7b8c6f4498-zzz8s,UID:4a0939e5-ff88-459f-baae-fc5bb7f2ce71,ResourceVersion:18399152,Generation:0,CreationTimestamp:2019-12-28 13:53:42 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e3dcaac4-4240-46a7-a69a-4a849da70a17 0xc0008e5747 0xc0008e5748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8h4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8h4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8h4hm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008e57b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008e57d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 13:53:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:53:43.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1573" for this suite.
Dec 28 13:54:35.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:54:35.345: INFO: namespace deployment-1573 deletion completed in 50.09320373s

• [SLOW TEST:89.297 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
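For reference, the test above scales an nginx Deployment while a rolling update is in flight; with a RollingUpdate strategy the deployment controller distributes added or removed replicas proportionally across the old and new ReplicaSets. A minimal sketch of such a Deployment follows; the replica count and surge/unavailable budgets are illustrative assumptions, while the image and the name=nginx label are taken from the pod dumps above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10                 # scaling this while a rollout is in flight
  selector:                    # exercises proportional scaling
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3              # illustrative budgets; extra replicas are added to
      maxUnavailable: 2        # each ReplicaSet in proportion to its current size
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
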
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:54:35.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 28 13:57:39.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:39.896: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:41.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:41.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:43.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:43.913: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:45.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:45.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:47.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:47.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:49.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:49.911: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:51.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:51.909: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:53.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:53.915: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:55.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:55.914: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:57.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:57.916: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:57:59.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:57:59.906: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:58:01.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:58:01.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:58:03.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:58:03.912: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:58:05.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:58:05.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 13:58:07.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 13:58:07.909: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:58:07.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2163" for this suite.
Dec 28 13:58:29.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:58:30.215: INFO: namespace container-lifecycle-hook-2163 deletion completed in 22.298166617s

• [SLOW TEST:234.870 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
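For reference, a pod with a postStart exec hook of the kind this test creates looks roughly as follows; only the pod name appears in the log above, so the image and commands are illustrative assumptions. The "check poststart hook" step verifies the hook's side effect before the pod is deleted.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                       # assumption: any image with a shell
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # executed inside the container right after it starts; the kubelet
          # does not consider the container Running until the hook returns
          command: ["sh", "-c", "echo started > /tmp/poststart"]
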
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:58:30.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:58:30.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5" in namespace "projected-5086" to be "success or failure"
Dec 28 13:58:30.368: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.181793ms
Dec 28 13:58:32.380: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017568299s
Dec 28 13:58:34.388: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025221335s
Dec 28 13:58:36.394: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03092337s
Dec 28 13:58:38.404: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041316226s
Dec 28 13:58:40.416: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053723676s
STEP: Saw pod success
Dec 28 13:58:40.417: INFO: Pod "downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5" satisfied condition "success or failure"
Dec 28 13:58:40.422: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5 container client-container: 
STEP: delete the pod
Dec 28 13:58:40.501: INFO: Waiting for pod downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5 to disappear
Dec 28 13:58:40.571: INFO: Pod downwardapi-volume-bd271dfd-462c-406c-8e9a-ac08d79285d5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:58:40.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5086" for this suite.
Dec 28 13:58:46.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:58:46.752: INFO: namespace projected-5086 deletion completed in 6.170879667s

• [SLOW TEST:16.535 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
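The downward API volume in this test exposes the container's CPU limit as a file; because the pod sets no limit, the kubelet substitutes the node's allocatable CPU. A hedged sketch of such a pod, where the image, command, and paths are illustrative assumptions and only the client-container name follows the log:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected file below
    # reports the node's allocatable CPU instead
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
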
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:58:46.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5828/secret-test-3732de6e-f592-4b71-a54a-64814461279d
STEP: Creating a pod to test consume secrets
Dec 28 13:58:46.906: INFO: Waiting up to 5m0s for pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d" in namespace "secrets-5828" to be "success or failure"
Dec 28 13:58:46.918: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.30752ms
Dec 28 13:58:48.932: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025276382s
Dec 28 13:58:50.958: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051209742s
Dec 28 13:58:52.974: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067560569s
Dec 28 13:58:54.984: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077923866s
STEP: Saw pod success
Dec 28 13:58:54.984: INFO: Pod "pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d" satisfied condition "success or failure"
Dec 28 13:58:54.989: INFO: Trying to get logs from node iruya-node pod pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d container env-test: 
STEP: delete the pod
Dec 28 13:58:55.052: INFO: Waiting for pod pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d to disappear
Dec 28 13:58:55.072: INFO: Pod pod-configmaps-85dc333c-3eca-46c7-a5c6-de0d9e3a1e2d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:58:55.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5828" for this suite.
Dec 28 13:59:01.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:59:01.253: INFO: namespace secrets-5828 deletion completed in 6.173707091s

• [SLOW TEST:14.500 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
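The secret-to-environment wiring this test exercises can be sketched as follows; the key, value, image, and command are illustrative assumptions, while the env-test container name follows the log above.

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1                # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox               # assumption
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA          # populated from the secret at container start
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
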
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:59:01.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 28 13:59:19.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:19.525: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:21.525: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:21.535: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:23.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:23.536: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:25.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:25.535: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:27.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:27.535: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:29.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:29.535: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:31.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:31.535: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:33.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:33.538: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:35.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:35.534: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 13:59:37.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 13:59:37.534: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 13:59:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9067" for this suite.
Dec 28 13:59:59.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:59:59.773: INFO: namespace container-lifecycle-hook-9067 deletion completed in 22.194569926s

• [SLOW TEST:58.519 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
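Here the hook fires on the way down: the pod declares a preStop httpGet hook, and the helper container created in BeforeEach records the request. A minimal sketch, in which the host, port, and path are illustrative placeholders standing in for the helper pod's address:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox                       # assumption
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        httpGet:
          # the kubelet issues this GET before sending SIGTERM to the
          # container; the values below are placeholders for the helper pod
          host: 10.44.0.1
          port: 8080
          path: /echo?msg=prestop
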
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 13:59:59.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 13:59:59.981: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:00:01.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5456" for this suite.
Dec 28 14:00:07.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:00:07.327: INFO: namespace custom-resource-definition-5456 deletion completed in 6.158998803s

• [SLOW TEST:7.553 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
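The test body above is short because it only round-trips a CustomResourceDefinition through the API server: create, confirm, delete. A hedged example of the kind of object involved; the group, kind, and plural are illustrative, and the apiextensions.k8s.io/v1beta1 API group is an assumption based on the vintage of this suite.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # must be <plural>.<group>
spec:
  group: example.com
  version: v1beta1               # served/storage version of the custom resource
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
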
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:00:07.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 28 14:00:07.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6760'
Dec 28 14:00:07.874: INFO: stderr: ""
Dec 28 14:00:07.875: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 14:00:07.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:08.111: INFO: stderr: ""
Dec 28 14:00:08.111: INFO: stdout: "update-demo-nautilus-9zgr4 update-demo-nautilus-jsvgx "
Dec 28 14:00:08.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zgr4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:08.218: INFO: stderr: ""
Dec 28 14:00:08.218: INFO: stdout: ""
Dec 28 14:00:08.218: INFO: update-demo-nautilus-9zgr4 is created but not running
Dec 28 14:00:13.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:15.120: INFO: stderr: ""
Dec 28 14:00:15.120: INFO: stdout: "update-demo-nautilus-9zgr4 update-demo-nautilus-jsvgx "
Dec 28 14:00:15.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zgr4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:15.523: INFO: stderr: ""
Dec 28 14:00:15.523: INFO: stdout: ""
Dec 28 14:00:15.523: INFO: update-demo-nautilus-9zgr4 is created but not running
Dec 28 14:00:20.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:20.659: INFO: stderr: ""
Dec 28 14:00:20.659: INFO: stdout: "update-demo-nautilus-9zgr4 update-demo-nautilus-jsvgx "
Dec 28 14:00:20.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zgr4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:20.792: INFO: stderr: ""
Dec 28 14:00:20.792: INFO: stdout: "true"
Dec 28 14:00:20.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zgr4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:20.896: INFO: stderr: ""
Dec 28 14:00:20.896: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 14:00:20.896: INFO: validating pod update-demo-nautilus-9zgr4
Dec 28 14:00:20.913: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 14:00:20.913: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 14:00:20.913: INFO: update-demo-nautilus-9zgr4 is verified up and running
Dec 28 14:00:20.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:21.037: INFO: stderr: ""
Dec 28 14:00:21.037: INFO: stdout: "true"
Dec 28 14:00:21.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:21.170: INFO: stderr: ""
Dec 28 14:00:21.170: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 14:00:21.170: INFO: validating pod update-demo-nautilus-jsvgx
Dec 28 14:00:21.176: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 14:00:21.176: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 14:00:21.176: INFO: update-demo-nautilus-jsvgx is verified up and running
STEP: scaling down the replication controller
Dec 28 14:00:21.178: INFO: scanned /root for discovery docs: 
Dec 28 14:00:21.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6760'
Dec 28 14:00:22.313: INFO: stderr: ""
Dec 28 14:00:22.313: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 14:00:22.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:22.578: INFO: stderr: ""
Dec 28 14:00:22.578: INFO: stdout: "update-demo-nautilus-9zgr4 update-demo-nautilus-jsvgx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 14:00:27.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:27.759: INFO: stderr: ""
Dec 28 14:00:27.759: INFO: stdout: "update-demo-nautilus-jsvgx "
Dec 28 14:00:27.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:27.905: INFO: stderr: ""
Dec 28 14:00:27.905: INFO: stdout: "true"
Dec 28 14:00:27.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:28.027: INFO: stderr: ""
Dec 28 14:00:28.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 14:00:28.027: INFO: validating pod update-demo-nautilus-jsvgx
Dec 28 14:00:28.038: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 14:00:28.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 14:00:28.039: INFO: update-demo-nautilus-jsvgx is verified up and running
STEP: scaling up the replication controller
Dec 28 14:00:28.044: INFO: scanned /root for discovery docs: 
Dec 28 14:00:28.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6760'
Dec 28 14:00:29.219: INFO: stderr: ""
Dec 28 14:00:29.219: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 14:00:29.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:29.332: INFO: stderr: ""
Dec 28 14:00:29.332: INFO: stdout: "update-demo-nautilus-794pr update-demo-nautilus-jsvgx "
Dec 28 14:00:29.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-794pr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:29.468: INFO: stderr: ""
Dec 28 14:00:29.468: INFO: stdout: ""
Dec 28 14:00:29.468: INFO: update-demo-nautilus-794pr is created but not running
Dec 28 14:00:34.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:34.634: INFO: stderr: ""
Dec 28 14:00:34.634: INFO: stdout: "update-demo-nautilus-794pr update-demo-nautilus-jsvgx "
Dec 28 14:00:34.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-794pr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:34.741: INFO: stderr: ""
Dec 28 14:00:34.741: INFO: stdout: ""
Dec 28 14:00:34.741: INFO: update-demo-nautilus-794pr is created but not running
Dec 28 14:00:39.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6760'
Dec 28 14:00:39.930: INFO: stderr: ""
Dec 28 14:00:39.931: INFO: stdout: "update-demo-nautilus-794pr update-demo-nautilus-jsvgx "
Dec 28 14:00:39.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-794pr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:40.078: INFO: stderr: ""
Dec 28 14:00:40.078: INFO: stdout: "true"
Dec 28 14:00:40.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-794pr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:40.164: INFO: stderr: ""
Dec 28 14:00:40.164: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 14:00:40.164: INFO: validating pod update-demo-nautilus-794pr
Dec 28 14:00:40.179: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 14:00:40.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 14:00:40.179: INFO: update-demo-nautilus-794pr is verified up and running
Dec 28 14:00:40.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:40.286: INFO: stderr: ""
Dec 28 14:00:40.286: INFO: stdout: "true"
Dec 28 14:00:40.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsvgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6760'
Dec 28 14:00:40.409: INFO: stderr: ""
Dec 28 14:00:40.409: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 14:00:40.409: INFO: validating pod update-demo-nautilus-jsvgx
Dec 28 14:00:40.413: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 14:00:40.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 14:00:40.413: INFO: update-demo-nautilus-jsvgx is verified up and running
STEP: using delete to clean up resources
Dec 28 14:00:40.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6760'
Dec 28 14:00:40.534: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 14:00:40.534: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 28 14:00:40.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6760'
Dec 28 14:00:40.652: INFO: stderr: "No resources found.\n"
Dec 28 14:00:40.652: INFO: stdout: ""
Dec 28 14:00:40.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6760 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 14:00:40.743: INFO: stderr: ""
Dec 28 14:00:40.743: INFO: stdout: "update-demo-nautilus-794pr\nupdate-demo-nautilus-jsvgx\n"
Dec 28 14:00:41.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6760'
Dec 28 14:00:42.153: INFO: stderr: "No resources found.\n"
Dec 28 14:00:42.153: INFO: stdout: ""
Dec 28 14:00:42.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6760 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 14:00:42.425: INFO: stderr: ""
Dec 28 14:00:42.425: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:00:42.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6760" for this suite.
Dec 28 14:01:04.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:01:04.699: INFO: namespace kubectl-6760 deletion completed in 22.253688427s

• [SLOW TEST:57.372 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
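The replication controller fed to "kubectl create -f -" above is roughly the following; the image and the name=update-demo label are taken from the log, while the port is an illustrative assumption. The scale steps then correspond to "kubectl scale rc update-demo-nautilus --replicas=1" and "--replicas=2", as logged.

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:                      # RC selectors are plain equality maps
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80      # assumption
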
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:01:04.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 28 14:01:04.840: INFO: Waiting up to 5m0s for pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb" in namespace "downward-api-1607" to be "success or failure"
Dec 28 14:01:04.854: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.277586ms
Dec 28 14:01:06.874: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033569839s
Dec 28 14:01:08.891: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050682656s
Dec 28 14:01:10.902: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06180191s
Dec 28 14:01:12.915: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074774121s
Dec 28 14:01:14.931: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090354905s
STEP: Saw pod success
Dec 28 14:01:14.931: INFO: Pod "downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb" satisfied condition "success or failure"
Dec 28 14:01:14.947: INFO: Trying to get logs from node iruya-node pod downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb container dapi-container: 
STEP: delete the pod
Dec 28 14:01:16.134: INFO: Waiting for pod downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb to disappear
Dec 28 14:01:16.150: INFO: Pod downward-api-373c9c5f-f954-4325-9ae3-b8d0d5fc7dbb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:01:16.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1607" for this suite.
Dec 28 14:01:22.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:01:22.419: INFO: namespace downward-api-1607 deletion completed in 6.24836643s

• [SLOW TEST:17.719 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
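The downward API field behind this test is status.hostIP, injected as an environment variable. A minimal sketch; the image and command are illustrative assumptions, and the dapi-container name follows the log.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-host-ip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # assumption
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # resolves to the IP of the node running the pod
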
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:01:22.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:01:22.741: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"017dfe0d-0f96-4fe3-a562-fb62db747deb", Controller:(*bool)(0xc0032737fa), BlockOwnerDeletion:(*bool)(0xc0032737fb)}}
Dec 28 14:01:22.835: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f234c296-385f-4ef9-b608-0c933d6a49fb", Controller:(*bool)(0xc0032739ea), BlockOwnerDeletion:(*bool)(0xc0032739eb)}}
Dec 28 14:01:22.890: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"73b45d47-7a90-4910-bb79-c284a377cfee", Controller:(*bool)(0xc00234f982), BlockOwnerDeletion:(*bool)(0xc00234f983)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:01:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3052" for this suite.
Dec 28 14:01:34.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:01:34.175: INFO: namespace gc-3052 deletion completed in 6.21355201s

• [SLOW TEST:11.755 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
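The three INFO lines above show the cycle the test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. In manifest form, pod1 looks roughly like the sketch below; the UID is pod3's, copied from the log, while the container spec is an illustrative assumption. Owner references can only be filled in once the owner's UID is known, which is why the test logs them after creation.

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 017dfe0d-0f96-4fe3-a562-fb62db747deb   # pod3's UID, from the log above
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: app                      # illustrative
    image: busybox                 # assumption
    command: ["sh", "-c", "sleep 600"]
# The garbage collector must recognize this cycle and delete the pods anyway,
# rather than block forever waiting for an owner to go first.
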
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:01:34.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-7332d7dd-bcad-400c-b767-82523d06b13c
STEP: Creating a pod to test consume secrets
Dec 28 14:01:34.289: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6" in namespace "projected-1530" to be "success or failure"
Dec 28 14:01:34.315: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.907047ms
Dec 28 14:01:36.328: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039429683s
Dec 28 14:01:38.345: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056055777s
Dec 28 14:01:40.353: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063836572s
Dec 28 14:01:42.363: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07407049s
STEP: Saw pod success
Dec 28 14:01:42.363: INFO: Pod "pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6" satisfied condition "success or failure"
Dec 28 14:01:42.366: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6 container secret-volume-test: 
STEP: delete the pod
Dec 28 14:01:42.443: INFO: Waiting for pod pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6 to disappear
Dec 28 14:01:42.518: INFO: Pod pod-projected-secrets-7ffb9eee-5d6b-43fe-9b35-86d6128392b6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:01:42.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1530" for this suite.
Dec 28 14:01:48.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:01:48.705: INFO: namespace projected-1530 deletion completed in 6.1788402s

• [SLOW TEST:14.530 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
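"Multiple volumes" here means the same secret is projected at two mount points in one pod. A sketch follows; the secret name is shortened and the image, command, and paths are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                     # assumption
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test  # illustrative
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
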
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:01:48.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 14:01:48.876: INFO: Number of nodes with available pods: 0
Dec 28 14:01:48.876: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:50.971: INFO: Number of nodes with available pods: 0
Dec 28 14:01:50.971: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:52.053: INFO: Number of nodes with available pods: 0
Dec 28 14:01:52.053: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:52.910: INFO: Number of nodes with available pods: 0
Dec 28 14:01:52.910: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:54.147: INFO: Number of nodes with available pods: 0
Dec 28 14:01:54.147: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:55.795: INFO: Number of nodes with available pods: 0
Dec 28 14:01:55.796: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:56.190: INFO: Number of nodes with available pods: 0
Dec 28 14:01:56.190: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:57.353: INFO: Number of nodes with available pods: 0
Dec 28 14:01:57.353: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:57.892: INFO: Number of nodes with available pods: 0
Dec 28 14:01:57.892: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:01:58.914: INFO: Number of nodes with available pods: 1
Dec 28 14:01:58.914: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:01:59.893: INFO: Number of nodes with available pods: 2
Dec 28 14:01:59.893: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 28 14:01:59.992: INFO: Number of nodes with available pods: 1
Dec 28 14:01:59.992: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:01.023: INFO: Number of nodes with available pods: 1
Dec 28 14:02:01.023: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:02.010: INFO: Number of nodes with available pods: 1
Dec 28 14:02:02.014: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:03.016: INFO: Number of nodes with available pods: 1
Dec 28 14:02:03.016: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:04.012: INFO: Number of nodes with available pods: 1
Dec 28 14:02:04.012: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:05.020: INFO: Number of nodes with available pods: 1
Dec 28 14:02:05.020: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:06.008: INFO: Number of nodes with available pods: 1
Dec 28 14:02:06.008: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:07.012: INFO: Number of nodes with available pods: 1
Dec 28 14:02:07.012: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:08.069: INFO: Number of nodes with available pods: 1
Dec 28 14:02:08.069: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:09.004: INFO: Number of nodes with available pods: 1
Dec 28 14:02:09.004: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:02:10.013: INFO: Number of nodes with available pods: 2
Dec 28 14:02:10.014: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-856, will wait for the garbage collector to delete the pods
Dec 28 14:02:10.082: INFO: Deleting DaemonSet.extensions daemon-set took: 8.975732ms
Dec 28 14:02:10.383: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.467609ms
Dec 28 14:02:26.603: INFO: Number of nodes with available pods: 0
Dec 28 14:02:26.603: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 14:02:26.612: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-856/daemonsets","resourceVersion":"18400427"},"items":null}

Dec 28 14:02:26.629: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-856/pods","resourceVersion":"18400427"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:02:26.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-856" for this suite.
Dec 28 14:02:34.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:02:34.832: INFO: namespace daemonsets-856 deletion completed in 8.182086253s

• [SLOW TEST:46.127 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
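The "simple DaemonSet" created above can be sketched as follows; the labels and image are illustrative assumptions. The test then forces one daemon pod's phase to Failed and waits for the controller to recreate it, which is the second wait loop in the log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set       # illustrative label
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                      # illustrative
        image: docker.io/library/nginx:1.14-alpine   # assumption: any long-running image
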
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:02:34.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1228 14:02:39.399718       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 14:02:39.399: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:02:39.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5473" for this suite.
Dec 28 14:02:47.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:02:47.671: INFO: namespace gc-5473 deletion completed in 8.26526832s

• [SLOW TEST:12.839 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
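The poll loop above ("expected 0 rs, got 1 rs") is the garbage collector catching up: deleting a Deployment with cascading enabled lets the GC remove the ReplicaSet it owns, and then the Pods the ReplicaSet owns. A hand-run equivalent on a v1.15 cluster might look like this (deployment name is illustrative, not from this run):

    # cascading delete (the default): the owned ReplicaSet and Pods are collected
    kubectl delete deployment nginx-deploy
    # orphaning delete: dependents are left behind with the owner reference removed
    kubectl delete deployment nginx-deploy --cascade=false
    kubectl get rs,pods    # only the first form drains to zero
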
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:02:47.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-885ec2b3-5d85-4278-a361-99e93fbe230b
STEP: Creating a pod to test consume configMaps
Dec 28 14:02:47.827: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d" in namespace "projected-1275" to be "success or failure"
Dec 28 14:02:47.874: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.060776ms
Dec 28 14:02:49.884: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056245738s
Dec 28 14:02:51.891: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063532156s
Dec 28 14:02:53.907: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079729739s
Dec 28 14:02:55.923: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095316191s
Dec 28 14:02:57.934: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106242203s
STEP: Saw pod success
Dec 28 14:02:57.934: INFO: Pod "pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d" satisfied condition "success or failure"
Dec 28 14:02:57.937: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 14:02:58.092: INFO: Waiting for pod pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d to disappear
Dec 28 14:02:58.102: INFO: Pod pod-projected-configmaps-2fde6c3b-96ed-4cd2-856c-19e5f5865e4d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:02:58.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1275" for this suite.
Dec 28 14:03:04.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:03:04.250: INFO: namespace projected-1275 deletion completed in 6.139617358s

• [SLOW TEST:16.578 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
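A minimal sketch of the path this spec exercises (all names illustrative): project a configMap into a pod volume and have the container read the key back, which is what drives the pod to Succeeded and satisfies the "success or failure" condition polled above.

    # projected-configmap-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/projected
      volumes:
      - name: config
        projected:
          sources:
          - configMap:
              name: demo-config

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl apply -f projected-configmap-demo.yaml
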
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:03:04.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:03:04.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6" in namespace "projected-3328" to be "success or failure"
Dec 28 14:03:04.405: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.52123ms
Dec 28 14:03:06.414: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026781515s
Dec 28 14:03:08.421: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034045791s
Dec 28 14:03:10.430: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043520439s
Dec 28 14:03:12.443: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Running", Reason="", readiness=true. Elapsed: 8.055672942s
Dec 28 14:03:14.453: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065881937s
STEP: Saw pod success
Dec 28 14:03:14.453: INFO: Pod "downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6" satisfied condition "success or failure"
Dec 28 14:03:14.458: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6 container client-container: 
STEP: delete the pod
Dec 28 14:03:14.822: INFO: Waiting for pod downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6 to disappear
Dec 28 14:03:14.859: INFO: Pod downwardapi-volume-9eb9b361-3c69-4d20-8001-318f0d38bae6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:03:14.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3328" for this suite.
Dec 28 14:03:21.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:03:21.128: INFO: namespace projected-3328 deletion completed in 6.251037258s

• [SLOW TEST:16.878 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
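Here the projected volume serves downward-API data rather than a configMap: the container's own memory request is written to a file, scaled by the divisor. A sketch under illustrative names (the file below would contain "32"):

    # downwardapi-mem-request-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mem-request-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/mem_request"]
        resources:
          requests:
            memory: 32Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.memory
                  divisor: 1Mi
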
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:03:21.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:03:21.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf" in namespace "downward-api-8598" to be "success or failure"
Dec 28 14:03:21.245: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.070278ms
Dec 28 14:03:23.254: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031634895s
Dec 28 14:03:25.261: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038012423s
Dec 28 14:03:27.271: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04772036s
Dec 28 14:03:29.324: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Running", Reason="", readiness=true. Elapsed: 8.100824636s
Dec 28 14:03:31.332: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109464777s
STEP: Saw pod success
Dec 28 14:03:31.333: INFO: Pod "downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf" satisfied condition "success or failure"
Dec 28 14:03:31.337: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf container client-container: 
STEP: delete the pod
Dec 28 14:03:31.433: INFO: Waiting for pod downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf to disappear
Dec 28 14:03:31.440: INFO: Pod downwardapi-volume-615ac07e-f3b1-4197-bfd7-e39e60ce2daf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:03:31.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8598" for this suite.
Dec 28 14:03:37.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:03:37.685: INFO: namespace downward-api-8598 deletion completed in 6.238856827s

• [SLOW TEST:16.556 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
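As the spec name says, this variant checks the documented fallback: when the container declares no CPU limit, a resourceFieldRef on limits.cpu resolves to the node's allocatable CPU. Same shape as the previous sketch, but with a plain downwardAPI volume and no resources block (names illustrative):

    # downwardapi-cpu-limit-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/cpu_limit"]   # prints node allocatable CPU
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report in millicores
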
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:03:37.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1228 14:03:55.207825       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 14:03:55.208: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:03:55.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8325" for this suite.
Dec 28 14:04:13.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:04:14.045: INFO: namespace gc-8325 deletion completed in 18.162676714s

• [SLOW TEST:36.360 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
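The property under test is ownerReference semantics: the garbage collector only collects a dependent once it has no remaining live owners, so the pods that were given simpletest-rc-to-stay as a second owner must survive the deletion of simpletest-rc-to-be-deleted. Schematically, a surviving pod carries metadata like this (UIDs are placeholders and must match the live objects):

    metadata:
      ownerReferences:
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-be-deleted
        uid: <uid-of-the-deleted-rc>
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-stay
        uid: <uid-of-the-surviving-rc>
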
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:04:14.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:04:26.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9622" for this suite.
Dec 28 14:04:32.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:04:32.500: INFO: namespace kubelet-test-9622 deletion completed in 6.166199409s

• [SLOW TEST:18.455 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
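This spec logs no STEP lines of its own; what it does is run a pod whose command always fails and assert that the container status ends up with a terminated state carrying a reason. By hand (names illustrative; the reason is typically "Error" for a non-zero exit):

    # bin-false-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: bin-false
        image: busybox
        command: ["/bin/false"]

    kubectl apply -f bin-false-demo.yaml
    kubectl get pod bin-false-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
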
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:04:32.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 14:04:40.765: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:04:40.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7681" for this suite.
Dec 28 14:04:46.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:04:47.006: INFO: namespace container-runtime-7681 deletion completed in 6.201508843s

• [SLOW TEST:14.505 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
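The "Expected: &{OK} to match" line is the assertion that the termination message surfaced in the container status equals what the container wrote to its termination-log file; FallbackToLogsOnError only substitutes the log tail when the file is empty and the container failed, so a succeeding pod must report the file contents. A minimal sketch (names illustrative):

    # termination-message-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo
    spec:
      restartPolicy: Never
      containers:
      - name: termmsg
        image: busybox
        command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError

    kubectl apply -f termination-message-demo.yaml
    kubectl get pod termination-message-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
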
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:04:47.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 28 14:04:47.128: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:05:03.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7494" for this suite.
Dec 28 14:05:25.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:05:25.956: INFO: namespace init-container-7494 deletion completed in 22.172159114s

• [SLOW TEST:38.950 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
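"PodSpec: initContainers in spec.initContainers" is the framework dumping the pod it built; the assertion is that every init container runs to completion, in order, before the regular container starts, on a pod with restartPolicy Always. A hand-written equivalent (names illustrative):

    # init-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox
        command: ["/bin/true"]
      - name: init2
        image: busybox
        command: ["/bin/true"]
      containers:
      - name: run1
        image: busybox
        command: ["/bin/sh", "-c", "sleep 3600"]
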
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:05:25.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d7c0cf0d-d859-46a7-8373-3013b85607ea
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d7c0cf0d-d859-46a7-8373-3013b85607ea
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:05:36.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8430" for this suite.
Dec 28 14:05:58.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:05:58.410: INFO: namespace projected-8430 deletion completed in 22.132817774s

• [SLOW TEST:32.454 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
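The point of "waiting to observe update in volume" is that configMap projections are not immutable snapshots: the kubelet periodically re-syncs them, so a change to the object eventually appears inside the running pod. Given a long-running pod mounting demo-config as in the earlier projected-configMap sketch (swap the command for a sleep), the update step looks like this; timing depends on the kubelet sync period:

    kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
    # poll until the kubelet refreshes the projection:
    kubectl exec projected-configmap-demo -- cat /etc/projected/data-1
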
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:05:58.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 28 14:06:06.614: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-29c01b52-b082-439d-be78-084de6020e42,GenerateName:,Namespace:events-896,SelfLink:/api/v1/namespaces/events-896/pods/send-events-29c01b52-b082-439d-be78-084de6020e42,UID:58654587-fd6b-4d89-960b-aab4b5ffbef3,ResourceVersion:18401092,Generation:0,CreationTimestamp:2019-12-28 14:05:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 540254578,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-thdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-thdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-thdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022a13f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022a1410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:05:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:06:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:06:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:05:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-28 14:05:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-28 14:06:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://ae86518014d9e533396c53fdce3e53e2c7e50e3bdb54ea36696feb291bb62808}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 28 14:06:08.622: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 28 14:06:10.630: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:06:10.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-896" for this suite.
Dec 28 14:07:02.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:07:02.814: INFO: namespace events-896 deletion completed in 52.157304573s

• [SLOW TEST:64.403 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
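"Saw scheduler event" / "Saw kubelet event" map to events sourced from the scheduler (reason Scheduled) and from the kubelet (reasons such as Pulled, Created, Started). The same check by hand with field selectors (pod name and namespace illustrative):

    kubectl get events -n events-demo \
      --field-selector involvedObject.name=send-events-demo,reason=Scheduled
    kubectl get events -n events-demo \
      --field-selector involvedObject.name=send-events-demo,reason=Started
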
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:07:02.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 14:07:02.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5276'
Dec 28 14:07:05.272: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 14:07:05.272: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 28 14:07:05.349: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-m9m59]
Dec 28 14:07:05.349: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-m9m59" in namespace "kubectl-5276" to be "running and ready"
Dec 28 14:07:05.404: INFO: Pod "e2e-test-nginx-rc-m9m59": Phase="Pending", Reason="", readiness=false. Elapsed: 55.51575ms
Dec 28 14:07:07.417: INFO: Pod "e2e-test-nginx-rc-m9m59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067851729s
Dec 28 14:07:09.428: INFO: Pod "e2e-test-nginx-rc-m9m59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078737878s
Dec 28 14:07:11.438: INFO: Pod "e2e-test-nginx-rc-m9m59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089312795s
Dec 28 14:07:13.448: INFO: Pod "e2e-test-nginx-rc-m9m59": Phase="Running", Reason="", readiness=true. Elapsed: 8.099014078s
Dec 28 14:07:13.448: INFO: Pod "e2e-test-nginx-rc-m9m59" satisfied condition "running and ready"
Dec 28 14:07:13.448: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-m9m59]
Dec 28 14:07:13.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5276'
Dec 28 14:07:13.625: INFO: stderr: ""
Dec 28 14:07:13.625: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 28 14:07:13.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5276'
Dec 28 14:07:13.739: INFO: stderr: ""
Dec 28 14:07:13.739: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:07:13.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5276" for this suite.
Dec 28 14:07:35.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:07:35.958: INFO: namespace kubectl-5276 deletion completed in 22.209915122s

• [SLOW TEST:33.143 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
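The captured stderr is worth taking at face value outside the test, too: on this v1.15 client, --generator=run/v1 (which creates a ReplicationController) was already deprecated. The replacements the warning itself suggests:

    # a bare pod instead of a ReplicationController:
    kubectl run e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
    # or create a managed workload explicitly:
    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
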
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:07:35.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 28 14:07:46.739: INFO: Successfully updated pod "labelsupdate0ba03fef-938d-4953-a3fe-b0eb574c1331"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:07:48.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2277" for this suite.
Dec 28 14:08:10.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:08:10.931: INFO: namespace projected-2277 deletion completed in 22.10375385s

• [SLOW TEST:34.972 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
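"Successfully updated pod" refers to a label change: label and annotation files served through the downward API are, like configMap projections, refreshed in place by the kubelet. A sketch (names illustrative):

    # labelsupdate-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-demo
      labels:
        foo: bar
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["/bin/sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels

    kubectl apply -f labelsupdate-demo.yaml
    kubectl label pod labelsupdate-demo foo=baz --overwrite
    # /etc/podinfo/labels is rewritten with the new value shortly afterwards
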
SSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:08:10.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-01fb9052-4c29-4217-9fd3-bf24860de59e
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:08:10.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1772" for this suite.
Dec 28 14:08:17.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:08:17.126: INFO: namespace configmap-1772 deletion completed in 6.123858765s

• [SLOW TEST:6.195 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
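This one is pure apiserver validation: configMap data keys must be non-empty (and stick to the allowed key characters), so the create is rejected before anything is stored. Reproduced by hand (name illustrative; exact error wording varies by version):

    # configmap-empty-key-demo.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-empty-key-demo
    data:
      "": should-be-rejected

    kubectl apply -f configmap-empty-key-demo.yaml
    # fails with a validation error on the empty key
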
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:08:17.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:08:17.293: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 14:08:17.314: INFO: Number of nodes with available pods: 0
Dec 28 14:08:17.315: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:19.607: INFO: Number of nodes with available pods: 0
Dec 28 14:08:19.607: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:20.340: INFO: Number of nodes with available pods: 0
Dec 28 14:08:20.341: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:21.339: INFO: Number of nodes with available pods: 0
Dec 28 14:08:21.339: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:22.386: INFO: Number of nodes with available pods: 0
Dec 28 14:08:22.386: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:24.948: INFO: Number of nodes with available pods: 0
Dec 28 14:08:24.948: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:25.337: INFO: Number of nodes with available pods: 0
Dec 28 14:08:25.337: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:26.409: INFO: Number of nodes with available pods: 0
Dec 28 14:08:26.409: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:27.332: INFO: Number of nodes with available pods: 0
Dec 28 14:08:27.332: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:08:28.334: INFO: Number of nodes with available pods: 2
Dec 28 14:08:28.334: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 28 14:08:28.420: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:28.420: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:29.438: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:29.438: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:30.434: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:30.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:31.436: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:31.436: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:32.434: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:32.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:33.434: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:33.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:34.465: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:34.465: INFO: Pod daemon-set-nxtzb is not available
Dec 28 14:08:34.465: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:35.437: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:35.438: INFO: Pod daemon-set-nxtzb is not available
Dec 28 14:08:35.438: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:36.433: INFO: Wrong image for pod: daemon-set-nxtzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:36.433: INFO: Pod daemon-set-nxtzb is not available
Dec 28 14:08:36.433: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:37.435: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:37.435: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:38.434: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:38.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:39.441: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:39.441: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:40.432: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:40.432: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:41.439: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:41.439: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:42.437: INFO: Pod daemon-set-ptz9j is not available
Dec 28 14:08:42.437: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:43.488: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:44.436: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:45.849: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:46.653: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:47.465: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:48.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:48.434: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:49.436: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:49.436: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:50.435: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:50.435: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:51.432: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:51.433: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:52.430: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:52.430: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:53.496: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:53.496: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:54.437: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:54.437: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:55.436: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:55.436: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:56.432: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:56.432: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:57.434: INFO: Wrong image for pod: daemon-set-pw66c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 14:08:57.434: INFO: Pod daemon-set-pw66c is not available
Dec 28 14:08:58.432: INFO: Pod daemon-set-xd5nb is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 28 14:08:58.446: INFO: Number of nodes with available pods: 1
Dec 28 14:08:58.446: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:08:59.466: INFO: Number of nodes with available pods: 1
Dec 28 14:08:59.466: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:00.462: INFO: Number of nodes with available pods: 1
Dec 28 14:09:00.462: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:01.462: INFO: Number of nodes with available pods: 1
Dec 28 14:09:01.462: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:02.988: INFO: Number of nodes with available pods: 1
Dec 28 14:09:02.988: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:03.474: INFO: Number of nodes with available pods: 1
Dec 28 14:09:03.474: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:04.462: INFO: Number of nodes with available pods: 1
Dec 28 14:09:04.462: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:05.459: INFO: Number of nodes with available pods: 1
Dec 28 14:09:05.459: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:09:06.472: INFO: Number of nodes with available pods: 2
Dec 28 14:09:06.472: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5391, will wait for the garbage collector to delete the pods
Dec 28 14:09:06.573: INFO: Deleting DaemonSet.extensions daemon-set took: 13.105951ms
Dec 28 14:09:06.873: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.499553ms
Dec 28 14:09:16.679: INFO: Number of nodes with available pods: 0
Dec 28 14:09:16.680: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 14:09:16.683: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5391/daemonsets","resourceVersion":"18401508"},"items":null}

Dec 28 14:09:16.685: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5391/pods","resourceVersion":"18401508"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:09:16.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5391" for this suite.
Dec 28 14:09:22.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:09:22.883: INFO: namespace daemonsets-5391 deletion completed in 6.167315807s

• [SLOW TEST:65.757 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
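The long poll above is the RollingUpdate strategy doing its job: with maxUnavailable 1 (the default), the controller replaces one node's daemon pod at a time, which is why the "is not available" lines track the replacement of daemon-set-nxtzb (by daemon-set-ptz9j) before daemon-set-pw66c (by daemon-set-xd5nb). The moving parts, schematically (container name is illustrative):

    # in the DaemonSet spec:
    # updateStrategy:
    #   type: RollingUpdate
    #   rollingUpdate:
    #     maxUnavailable: 1
    # the test triggers the rollout by changing the pod template image, e.g.:
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
    kubectl rollout status daemonset/daemon-set
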
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:09:22.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6551
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 28 14:09:23.051: INFO: Found 0 stateful pods, waiting for 3
Dec 28 14:09:33.061: INFO: Found 1 stateful pods, waiting for 3
Dec 28 14:09:43.070: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 14:09:43.070: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 14:09:43.070: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 14:09:53.060: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 14:09:53.060: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 14:09:53.060: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 14:09:53.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6551 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 14:09:53.583: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 14:09:53.583: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 14:09:53.583: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 28 14:10:03.683: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 28 14:10:13.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6551 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 14:10:14.176: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 14:10:14.176: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 14:10:14.176: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 14:10:24.212: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:10:24.212: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:24.212: INFO: Waiting for Pod statefulset-6551/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:24.212: INFO: Waiting for Pod statefulset-6551/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:34.237: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:10:34.237: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:34.237: INFO: Waiting for Pod statefulset-6551/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:44.228: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:10:44.228: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:44.228: INFO: Waiting for Pod statefulset-6551/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:10:54.231: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:10:54.231: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 14:11:04.245: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 28 14:11:14.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6551 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 14:11:14.792: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 14:11:14.792: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 14:11:14.792: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 14:11:14.873: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 28 14:11:25.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6551 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 14:11:25.637: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 14:11:25.637: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 14:11:25.637: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 14:11:35.698: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:11:35.698: INFO: Waiting for Pod statefulset-6551/ss2-0 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:35.698: INFO: Waiting for Pod statefulset-6551/ss2-1 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:35.698: INFO: Waiting for Pod statefulset-6551/ss2-2 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:46.565: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:11:46.565: INFO: Waiting for Pod statefulset-6551/ss2-0 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:46.565: INFO: Waiting for Pod statefulset-6551/ss2-1 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:55.735: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:11:55.735: INFO: Waiting for Pod statefulset-6551/ss2-0 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:11:55.735: INFO: Waiting for Pod statefulset-6551/ss2-1 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:12:05.715: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:12:05.715: INFO: Waiting for Pod statefulset-6551/ss2-0 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
Dec 28 14:12:15.749: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Dec 28 14:12:15.749: INFO: Waiting for Pod statefulset-6551/ss2-0 to move from current revision ss2-7c9b54fd4c to update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 28 14:12:25.714: INFO: Deleting all statefulset in ns statefulset-6551
Dec 28 14:12:25.719: INFO: Scaling statefulset ss2 to 0
Dec 28 14:13:05.761: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 14:13:05.768: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:13:05.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6551" for this suite.
Dec 28 14:13:13.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:13:14.030: INFO: namespace statefulset-6551 deletion completed in 8.203766594s

• [SLOW TEST:231.147 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
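
The StatefulSet controller drives both the update and the rollback above through ControllerRevisions, replacing pods in reverse ordinal order (ss2-2 first, ss2-0 last). A minimal sketch of the same flow with plain kubectl, assuming the statefulset-6551/ss2 objects from this log are present; the nginx:1.15-alpine tag is illustrative:

$ # Trigger a rolling update by patching the pod template image.
$ kubectl -n statefulset-6551 patch statefulset ss2 --type=json \
    -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"nginx:1.15-alpine"}]'
$ kubectl -n statefulset-6551 get controllerrevisions   # one revision per template version
$ kubectl -n statefulset-6551 rollout status statefulset/ss2
$ # Roll back to the previous revision, as the "Rolling back" step does.
$ kubectl -n statefulset-6551 rollout undo statefulset/ss2
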
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:13:14.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 14:13:14.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1634'
Dec 28 14:13:14.353: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 14:13:14.353: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 28 14:13:14.366: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 28 14:13:14.399: INFO: scanned /root for discovery docs: 
Dec 28 14:13:14.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1634'
Dec 28 14:13:37.657: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 14:13:37.657: INFO: stdout: "Created e2e-test-nginx-rc-8eece036522e122608071e34721fad5f\nScaling up e2e-test-nginx-rc-8eece036522e122608071e34721fad5f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8eece036522e122608071e34721fad5f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8eece036522e122608071e34721fad5f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 28 14:13:37.657: INFO: stdout: "Created e2e-test-nginx-rc-8eece036522e122608071e34721fad5f\nScaling up e2e-test-nginx-rc-8eece036522e122608071e34721fad5f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8eece036522e122608071e34721fad5f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8eece036522e122608071e34721fad5f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 28 14:13:37.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1634'
Dec 28 14:13:37.873: INFO: stderr: ""
Dec 28 14:13:37.873: INFO: stdout: "e2e-test-nginx-rc-8eece036522e122608071e34721fad5f-rqqnh "
Dec 28 14:13:37.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8eece036522e122608071e34721fad5f-rqqnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1634'
Dec 28 14:13:38.024: INFO: stderr: ""
Dec 28 14:13:38.024: INFO: stdout: "true"
Dec 28 14:13:38.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8eece036522e122608071e34721fad5f-rqqnh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1634'
Dec 28 14:13:38.125: INFO: stderr: ""
Dec 28 14:13:38.125: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 28 14:13:38.125: INFO: e2e-test-nginx-rc-8eece036522e122608071e34721fad5f-rqqnh is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 28 14:13:38.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1634'
Dec 28 14:13:38.212: INFO: stderr: ""
Dec 28 14:13:38.212: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:13:38.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1634" for this suite.
Dec 28 14:13:44.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:13:44.465: INFO: namespace kubectl-1634 deletion completed in 6.189615045s

• [SLOW TEST:30.434 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
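
kubectl rolling-update (already deprecated here, and removed in later releases) only operates on ReplicationControllers: it clones the RC, scales the clone up one pod at a time while scaling the original down, then renames the clone back, which is exactly the stdout sequence captured above. A sketch of the same invocation outside the harness, using the names from this log:

$ kubectl -n kubectl-1634 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
$ kubectl -n kubectl-1634 rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
$ # On current clusters, use a Deployment with "kubectl set image" and "kubectl rollout" instead.
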
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:13:44.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 28 14:13:44.615: INFO: Waiting up to 5m0s for pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224" in namespace "emptydir-307" to be "success or failure"
Dec 28 14:13:44.624: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Pending", Reason="", readiness=false. Elapsed: 8.681291ms
Dec 28 14:13:46.646: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031370217s
Dec 28 14:13:48.652: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037191548s
Dec 28 14:13:50.665: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05047667s
Dec 28 14:13:52.672: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057116717s
Dec 28 14:13:54.687: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072440031s
STEP: Saw pod success
Dec 28 14:13:54.688: INFO: Pod "pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224" satisfied condition "success or failure"
Dec 28 14:13:54.698: INFO: Trying to get logs from node iruya-node pod pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224 container test-container: 
STEP: delete the pod
Dec 28 14:13:54.757: INFO: Waiting for pod pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224 to disappear
Dec 28 14:13:54.777: INFO: Pod pod-8974692a-b0eb-4bd9-8b83-6b6ddba73224 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:13:54.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-307" for this suite.
Dec 28 14:14:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:14:00.940: INFO: namespace emptydir-307 deletion completed in 6.157350628s

• [SLOW TEST:16.474 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
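
The (non-root,0644,default) case writes a file with mode 0644 into an emptyDir on the node's default storage medium and reads it back as a non-root user. A hand-rolled approximation; the pod name and UID 1001 are illustrative (emptyDir directories are created world-writable, so a non-root user can create the file):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
  restartPolicy: Never
EOF
$ kubectl logs emptydir-0644-demo    # expect: -rw-r--r-- ... 1001 ...
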
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:14:00.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 28 14:14:09.231: INFO: Pod pod-hostip-cd04b56a-b087-4e7c-8a87-c33b7533e901 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:14:09.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9420" for this suite.
Dec 28 14:14:31.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:14:31.421: INFO: namespace pods-9420 deletion completed in 22.180572791s

• [SLOW TEST:30.480 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
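
status.hostIP is filled in by the kubelet once the pod is bound to a node, and that is all the assertion above checks. The same value can be read directly with a jsonpath query (pod name and namespace taken from the log):

$ kubectl -n pods-9420 get pod pod-hostip-cd04b56a-b087-4e7c-8a87-c33b7533e901 \
    -o jsonpath='{.status.hostIP}'
10.96.3.65
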
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:14:31.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-908b1fa6-377c-4ada-8cea-b516c612742e
STEP: Creating a pod to test consume secrets
Dec 28 14:14:31.550: INFO: Waiting up to 5m0s for pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb" in namespace "secrets-8564" to be "success or failure"
Dec 28 14:14:31.571: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.094393ms
Dec 28 14:14:33.590: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039542635s
Dec 28 14:14:35.615: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064957104s
Dec 28 14:14:37.624: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073659347s
Dec 28 14:14:39.633: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082398584s
Dec 28 14:14:41.640: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089842621s
STEP: Saw pod success
Dec 28 14:14:41.640: INFO: Pod "pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb" satisfied condition "success or failure"
Dec 28 14:14:41.644: INFO: Trying to get logs from node iruya-node pod pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb container secret-volume-test: 
STEP: delete the pod
Dec 28 14:14:41.754: INFO: Waiting for pod pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb to disappear
Dec 28 14:14:41.766: INFO: Pod pod-secrets-c4e13a16-8ebc-48fc-9630-5e28b25276cb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:14:41.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8564" for this suite.
Dec 28 14:14:47.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:14:47.975: INFO: namespace secrets-8564 deletion completed in 6.199957551s

• [SLOW TEST:16.554 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
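
Secret volume files default to mode 0644; this test sets defaultMode explicitly and verifies the mounted file's permission bits. A sketch with an explicit 0400 (the secret name, key, and mode are illustrative; ls -lL follows the kubelet's ..data symlinks so the target mode is shown):

$ kubectl create secret generic demo-secret --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
  restartPolicy: Never
EOF
$ kubectl logs secret-mode-demo      # expect: -r-------- ... data-1
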
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:14:47.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:14:48.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3" in namespace "projected-0" to be "success or failure"
Dec 28 14:14:48.123: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.215381ms
Dec 28 14:14:50.132: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026220276s
Dec 28 14:14:52.194: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087826317s
Dec 28 14:14:54.266: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159526198s
Dec 28 14:14:56.284: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.178096128s
STEP: Saw pod success
Dec 28 14:14:56.285: INFO: Pod "downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3" satisfied condition "success or failure"
Dec 28 14:14:56.292: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3 container client-container: 
STEP: delete the pod
Dec 28 14:14:56.356: INFO: Waiting for pod downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3 to disappear
Dec 28 14:14:56.387: INFO: Pod downwardapi-volume-9e30dc72-e6d4-47cb-9a14-ceb70f0d97e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:14:56.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-0" for this suite.
Dec 28 14:15:02.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:15:02.570: INFO: namespace projected-0 deletion completed in 6.175309923s

• [SLOW TEST:14.594 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
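
The downward API exposes a container's own resource requests through a volume file via resourceFieldRef; with divisor 1m, a 250m CPU request renders as the string 250. A sketch of the projected form this test family uses (pod name and the 250m request are illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
  restartPolicy: Never
EOF
$ kubectl logs downward-cpu-demo     # expect: 250
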
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:15:02.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 14:15:16.776: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.783: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.786: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.792: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.797: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.801: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.806: INFO: Unable to read jessie_udp@PodARecord from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.810: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8: the server could not find the requested resource (get pods dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8)
Dec 28 14:15:16.810: INFO: Lookups using dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 14:15:21.954: INFO: DNS probes using dns-3585/dns-test-739b9e83-9f9b-4c01-a836-d700eeb943d8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:15:22.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3585" for this suite.
Dec 28 14:15:28.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:15:28.214: INFO: namespace dns-3585 deletion completed in 6.142264124s

• [SLOW TEST:25.643 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
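
The wheezy and jessie probe pods run the same dig loop against kubernetes.default.svc.cluster.local over UDP and TCP, plus the pod's own A record; the earlier "Unable to read" lines only mean the result files had not been written yet when the framework first polled. A manual spot check from a throwaway pod (the image is illustrative; any image with nslookup/dig works):

$ kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0 \
    --restart=Never -- sleep 3600
$ kubectl exec dnsutils -- nslookup kubernetes.default.svc.cluster.local
$ kubectl exec dnsutils -- dig +tcp +short kubernetes.default.svc.cluster.local A
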
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:15:28.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 28 14:15:28.301: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:15:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5799" for this suite.
Dec 28 14:15:34.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:15:34.543: INFO: namespace kubectl-5799 deletion completed in 6.139178212s

• [SLOW TEST:6.328 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
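
Passing -p 0 (--port 0) makes kubectl proxy bind an ephemeral port, which it reports on startup; the test then curls /api/ through it. By hand (the printed port number is illustrative):

$ kubectl proxy -p 0 --disable-filter &
Starting to serve on 127.0.0.1:41679
$ curl -s 127.0.0.1:41679/api/
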
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:15:34.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 28 14:15:34.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7213'
Dec 28 14:15:35.011: INFO: stderr: ""
Dec 28 14:15:35.011: INFO: stdout: "pod/pause created\n"
Dec 28 14:15:35.011: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 28 14:15:35.012: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7213" to be "running and ready"
Dec 28 14:15:35.080: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 68.416195ms
Dec 28 14:15:37.090: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078868532s
Dec 28 14:15:39.099: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087061661s
Dec 28 14:15:41.106: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094295662s
Dec 28 14:15:43.115: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.103642147s
Dec 28 14:15:43.115: INFO: Pod "pause" satisfied condition "running and ready"
Dec 28 14:15:43.115: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 28 14:15:43.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7213'
Dec 28 14:15:43.325: INFO: stderr: ""
Dec 28 14:15:43.325: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 28 14:15:43.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7213'
Dec 28 14:15:43.422: INFO: stderr: ""
Dec 28 14:15:43.422: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 28 14:15:43.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7213'
Dec 28 14:15:43.518: INFO: stderr: ""
Dec 28 14:15:43.518: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 28 14:15:43.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7213'
Dec 28 14:15:43.608: INFO: stderr: ""
Dec 28 14:15:43.608: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 28 14:15:43.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7213'
Dec 28 14:15:43.762: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 14:15:43.762: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 28 14:15:43.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7213'
Dec 28 14:15:43.930: INFO: stderr: "No resources found.\n"
Dec 28 14:15:43.931: INFO: stdout: ""
Dec 28 14:15:43.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7213 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 14:15:44.036: INFO: stderr: ""
Dec 28 14:15:44.037: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:15:44.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7213" for this suite.
Dec 28 14:15:50.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:15:50.176: INFO: namespace kubectl-7213 deletion completed in 6.131807456s

• [SLOW TEST:15.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
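
The label cycle above in isolation, using the names from the log: -L adds a TESTING-LABEL column to the get output, and a trailing '-' on the key removes the label.

$ kubectl -n kubectl-7213 label pod pause testing-label=testing-label-value
$ kubectl -n kubectl-7213 get pod pause -L testing-label
$ kubectl -n kubectl-7213 label pod pause testing-label-
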
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:15:50.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 28 14:15:50.295: INFO: Waiting up to 5m0s for pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d" in namespace "emptydir-8387" to be "success or failure"
Dec 28 14:15:50.304: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743165ms
Dec 28 14:15:52.312: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0169039s
Dec 28 14:15:54.325: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02985145s
Dec 28 14:15:56.442: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147080461s
Dec 28 14:15:58.454: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Running", Reason="", readiness=true. Elapsed: 8.15914831s
Dec 28 14:16:00.470: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17452317s
STEP: Saw pod success
Dec 28 14:16:00.470: INFO: Pod "pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d" satisfied condition "success or failure"
Dec 28 14:16:00.476: INFO: Trying to get logs from node iruya-node pod pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d container test-container: 
STEP: delete the pod
Dec 28 14:16:01.016: INFO: Waiting for pod pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d to disappear
Dec 28 14:16:01.020: INFO: Pod pod-ed3db9d0-5560-4e5b-ab4e-698f4a41663d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:16:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8387" for this suite.
Dec 28 14:16:07.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:16:07.192: INFO: namespace emptydir-8387 deletion completed in 6.16745608s

• [SLOW TEST:17.016 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
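
For the tmpfs case the only change is medium: Memory, which backs the emptyDir with RAM (and counts against the container's memory limit). A sketch that confirms the mount type from inside the pod (pod name is illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
  restartPolicy: Never
EOF
$ kubectl logs emptydir-tmpfs-demo   # expect: tmpfs on /test-volume type tmpfs ...
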
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:16:07.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 28 14:16:07.327: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 14:16:07.347: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 14:16:07.379: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 28 14:16:07.395: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 28 14:16:07.395: INFO: 	Container weave ready: true, restart count 0
Dec 28 14:16:07.395: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 14:16:07.395: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.395: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 14:16:07.395: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 28 14:16:07.411: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container etcd ready: true, restart count 0
Dec 28 14:16:07.411: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 28 14:16:07.411: INFO: 	Container weave ready: true, restart count 0
Dec 28 14:16:07.411: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 14:16:07.411: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container coredns ready: true, restart count 0
Dec 28 14:16:07.411: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 28 14:16:07.411: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 14:16:07.411: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 28 14:16:07.411: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 28 14:16:07.411: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 28 14:16:07.411: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1e216ac1-e2e1-4a55-a0cf-ff0eeed3d7cf 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1e216ac1-e2e1-4a55-a0cf-ff0eeed3d7cf off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1e216ac1-e2e1-4a55-a0cf-ff0eeed3d7cf
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:16:27.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3229" for this suite.
Dec 28 14:16:57.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:16:57.919: INFO: namespace sched-pred-3229 deletion completed in 30.21951423s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:50.727 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
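
The predicate test labels a node it knows can host the pod, relaunches the pod with a matching nodeSelector, and verifies placement. The same round trip by hand, against the iruya-node from the log; the disk=ssd label, pod name, and pause image are illustrative:

$ kubectl label node iruya-node disk=ssd
$ kubectl run nodeselector-demo --image=k8s.gcr.io/pause:3.1 --restart=Never \
    --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"disk":"ssd"}}}'
$ kubectl get pod nodeselector-demo -o wide   # NODE column shows iruya-node
$ kubectl label node iruya-node disk-         # remove the label afterwards, as the test does
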
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:16:57.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-31e8ae02-fc00-43af-bcb8-a484235ec04c
STEP: Creating a pod to test consume configMaps
Dec 28 14:16:58.078: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b" in namespace "projected-6347" to be "success or failure"
Dec 28 14:16:58.123: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.144056ms
Dec 28 14:17:00.133: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055455591s
Dec 28 14:17:02.145: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067266097s
Dec 28 14:17:04.150: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071737127s
Dec 28 14:17:06.163: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085325192s
STEP: Saw pod success
Dec 28 14:17:06.163: INFO: Pod "pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b" satisfied condition "success or failure"
Dec 28 14:17:06.167: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 14:17:06.285: INFO: Waiting for pod pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b to disappear
Dec 28 14:17:06.295: INFO: Pod pod-projected-configmaps-6a8b9b02-6199-4ae8-9c98-907ec3594b8b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:17:06.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6347" for this suite.
Dec 28 14:17:12.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:17:12.642: INFO: namespace projected-6347 deletion completed in 6.337671542s

• [SLOW TEST:14.722 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
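
"With mappings" means the items list renames ConfigMap keys to chosen paths inside the volume instead of exposing them under the key name. A sketch (ConfigMap, pod, and path names are illustrative):

$ kubectl create configmap demo-cm --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/path/to/data-file"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1
            path: path/to/data-file
  restartPolicy: Never
EOF
$ kubectl logs projected-cm-demo     # expect: value-1
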
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:17:12.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 28 14:17:21.316: INFO: Successfully updated pod "pod-update-ce032599-e8bc-4eac-b8ca-0e777ffaef3c"
STEP: verifying the updated pod is in kubernetes
Dec 28 14:17:21.336: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:17:21.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2356" for this suite.
Dec 28 14:17:43.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:17:43.492: INFO: namespace pods-2356 deletion completed in 22.144761252s

• [SLOW TEST:30.850 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
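
Most pod spec fields are immutable after creation; "Successfully updated pod" above refers to an in-place change of the kind the API allows, such as metadata labels or a container image. A label patch against the pod from the log is the simplest legal example (the label key and value are illustrative):

$ kubectl -n pods-2356 patch pod pod-update-ce032599-e8bc-4eac-b8ca-0e777ffaef3c \
    --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}'
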
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:17:43.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-fcb3e209-313a-4a7e-bd42-4b9ae3737985
STEP: Creating a pod to test consume secrets
Dec 28 14:17:43.664: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511" in namespace "projected-3461" to be "success or failure"
Dec 28 14:17:43.678: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Pending", Reason="", readiness=false. Elapsed: 13.20993ms
Dec 28 14:17:45.690: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025486361s
Dec 28 14:17:47.700: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035254574s
Dec 28 14:17:49.709: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044025744s
Dec 28 14:17:51.717: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052509912s
Dec 28 14:17:53.726: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061817167s
STEP: Saw pod success
Dec 28 14:17:53.727: INFO: Pod "pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511" satisfied condition "success or failure"
Dec 28 14:17:53.731: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 14:17:53.860: INFO: Waiting for pod pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511 to disappear
Dec 28 14:17:53.880: INFO: Pod pod-projected-secrets-1c7e436b-d6ba-4879-8921-0046f1a15511 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:17:53.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3461" for this suite.
Dec 28 14:17:59.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:18:00.031: INFO: namespace projected-3461 deletion completed in 6.137527815s

• [SLOW TEST:16.538 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
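
Here both a non-default file mode and an fsGroup are in play: defaultMode sets the permission bits on the projected files, while fsGroup sets their group ownership so a non-root user can read them. A sketch (UID, GID, mode, and names are illustrative):

$ kubectl create secret generic demo-secret --from-literal=data-1=value-1   # skip if it already exists
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-fsgroup-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "id && ls -lL /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
  restartPolicy: Never
EOF
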
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:18:00.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c82084cb-49db-4d47-9198-b9fac277c0f3
STEP: Creating a pod to test consume secrets
Dec 28 14:18:00.227: INFO: Waiting up to 5m0s for pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be" in namespace "secrets-4814" to be "success or failure"
Dec 28 14:18:00.425: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be": Phase="Pending", Reason="", readiness=false. Elapsed: 197.345113ms
Dec 28 14:18:02.438: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210687372s
Dec 28 14:18:04.445: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218106844s
Dec 28 14:18:06.458: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230901881s
Dec 28 14:18:08.479: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.252054261s
STEP: Saw pod success
Dec 28 14:18:08.480: INFO: Pod "pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be" satisfied condition "success or failure"
Dec 28 14:18:08.483: INFO: Trying to get logs from node iruya-node pod pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be container secret-volume-test: 
STEP: delete the pod
Dec 28 14:18:08.538: INFO: Waiting for pod pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be to disappear
Dec 28 14:18:08.596: INFO: Pod pod-secrets-1dc149f7-c65e-4e95-9768-1e139a65e9be no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:18:08.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4814" for this suite.
Dec 28 14:18:14.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:18:14.752: INFO: namespace secrets-4814 deletion completed in 6.146463992s

• [SLOW TEST:14.721 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
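
Without mode overrides, secret volume files default to 0644, delivered through the kubelet's atomic ..data symlink scheme. Reading the same data back through the API instead of a volume is a one-liner; the data-1 key shown here is an assumption about the fixture, so substitute the actual key:

$ kubectl -n secrets-4814 get secret secret-test-c82084cb-49db-4d47-9198-b9fac277c0f3 \
    -o jsonpath='{.data.data-1}' | base64 -d
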
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:18:14.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-02e83e13-c9bc-4219-97a8-f32b16e0b96a
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:18:14.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5911" for this suite.
Dec 28 14:18:20.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:18:21.039: INFO: namespace secrets-5911 deletion completed in 6.150002923s

• [SLOW TEST:6.286 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
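
The failure case never reaches a pod: the API server's validation rejects a Secret whose data map contains an empty key. Reproducing it directly (the secret name is illustrative; dGVzdA== is base64 for "test"):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dGVzdA==
EOF
$ # The request fails with a validation error on the empty data key.
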
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:18:21.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-42vv
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 14:18:21.202: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-42vv" in namespace "subpath-3917" to be "success or failure"
Dec 28 14:18:21.207: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.795442ms
Dec 28 14:18:23.219: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01666181s
Dec 28 14:18:25.227: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025190038s
Dec 28 14:18:27.234: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032321638s
Dec 28 14:18:29.296: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 8.093823119s
Dec 28 14:18:31.304: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 10.102056138s
Dec 28 14:18:33.313: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 12.111492589s
Dec 28 14:18:35.321: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 14.119553549s
Dec 28 14:18:37.338: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 16.136428246s
Dec 28 14:18:39.351: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 18.149338419s
Dec 28 14:18:41.360: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 20.158530432s
Dec 28 14:18:43.374: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 22.171920609s
Dec 28 14:18:45.388: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 24.18631155s
Dec 28 14:18:47.400: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 26.197872441s
Dec 28 14:18:49.412: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Running", Reason="", readiness=true. Elapsed: 28.210132286s
Dec 28 14:18:51.424: INFO: Pod "pod-subpath-test-projected-42vv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.221742598s
STEP: Saw pod success
Dec 28 14:18:51.424: INFO: Pod "pod-subpath-test-projected-42vv" satisfied condition "success or failure"
Dec 28 14:18:51.431: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-42vv container test-container-subpath-projected-42vv: 
STEP: delete the pod
Dec 28 14:18:51.493: INFO: Waiting for pod pod-subpath-test-projected-42vv to disappear
Dec 28 14:18:51.501: INFO: Pod pod-subpath-test-projected-42vv no longer exists
STEP: Deleting pod pod-subpath-test-projected-42vv
Dec 28 14:18:51.502: INFO: Deleting pod "pod-subpath-test-projected-42vv" in namespace "subpath-3917"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:18:51.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3917" for this suite.
Dec 28 14:18:57.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:18:57.739: INFO: namespace subpath-3917 deletion completed in 6.219548532s

• [SLOW TEST:36.699 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
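
[Editor's note] Atomic-writer volumes (configMap, secret, downwardAPI, projected) are updated by swapping a timestamped directory behind a symlink, and mounting one of their entries via subPath is the case this spec covers. A minimal sketch of the relevant volume and mount, assuming a projected configMap source; "subpath-data" and "projected-file" are illustrative names.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "subpath-data", // illustrative source object
                        },
                    },
                }},
            },
        },
    }
    mount := corev1.VolumeMount{
        Name:      "test-volume",
        MountPath: "/test-volume",
        // SubPath mounts a single entry of the projected volume instead of the
        // whole directory; the spec verifies the file stays readable across
        // the volume's atomic symlink swaps.
        SubPath: "projected-file", // illustrative path inside the volume
    }
    fmt.Printf("mounting %q of volume %q at %s\n", mount.SubPath, vol.Name, mount.MountPath)
}
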
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:18:57.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:18:57.910: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 28 14:18:57.925: INFO: Number of nodes with available pods: 0
Dec 28 14:18:57.925: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 28 14:18:57.973: INFO: Number of nodes with available pods: 0
Dec 28 14:18:57.973: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:18:58.981: INFO: Number of nodes with available pods: 0
Dec 28 14:18:58.981: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:18:59.984: INFO: Number of nodes with available pods: 0
Dec 28 14:18:59.984: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:00.984: INFO: Number of nodes with available pods: 0
Dec 28 14:19:00.984: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:01.984: INFO: Number of nodes with available pods: 0
Dec 28 14:19:01.984: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:02.983: INFO: Number of nodes with available pods: 0
Dec 28 14:19:02.983: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:03.988: INFO: Number of nodes with available pods: 0
Dec 28 14:19:03.988: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:04.992: INFO: Number of nodes with available pods: 0
Dec 28 14:19:04.993: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:05.987: INFO: Number of nodes with available pods: 0
Dec 28 14:19:05.987: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:06.984: INFO: Number of nodes with available pods: 1
Dec 28 14:19:06.984: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 28 14:19:07.083: INFO: Number of nodes with available pods: 1
Dec 28 14:19:07.083: INFO: Number of running nodes: 0, number of available pods: 1
Dec 28 14:19:08.092: INFO: Number of nodes with available pods: 0
Dec 28 14:19:08.092: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 28 14:19:08.121: INFO: Number of nodes with available pods: 0
Dec 28 14:19:08.121: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:09.133: INFO: Number of nodes with available pods: 0
Dec 28 14:19:09.134: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:10.129: INFO: Number of nodes with available pods: 0
Dec 28 14:19:10.129: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:11.131: INFO: Number of nodes with available pods: 0
Dec 28 14:19:11.131: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:12.137: INFO: Number of nodes with available pods: 0
Dec 28 14:19:12.138: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:13.130: INFO: Number of nodes with available pods: 0
Dec 28 14:19:13.131: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:14.135: INFO: Number of nodes with available pods: 0
Dec 28 14:19:14.135: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:15.138: INFO: Number of nodes with available pods: 0
Dec 28 14:19:15.138: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:16.134: INFO: Number of nodes with available pods: 0
Dec 28 14:19:16.134: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:17.129: INFO: Number of nodes with available pods: 0
Dec 28 14:19:17.129: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:18.137: INFO: Number of nodes with available pods: 0
Dec 28 14:19:18.137: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:19.135: INFO: Number of nodes with available pods: 0
Dec 28 14:19:19.135: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:20.138: INFO: Number of nodes with available pods: 0
Dec 28 14:19:20.138: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:21.131: INFO: Number of nodes with available pods: 0
Dec 28 14:19:21.131: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:19:22.133: INFO: Number of nodes with available pods: 1
Dec 28 14:19:22.133: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9774, will wait for the garbage collector to delete the pods
Dec 28 14:19:22.234: INFO: Deleting DaemonSet.extensions daemon-set took: 9.617266ms
Dec 28 14:19:22.534: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.713931ms
Dec 28 14:19:36.540: INFO: Number of nodes with available pods: 0
Dec 28 14:19:36.540: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 14:19:36.543: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9774/daemonsets","resourceVersion":"18403225"},"items":null}

Dec 28 14:19:36.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9774/pods","resourceVersion":"18403225"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:19:36.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9774" for this suite.
Dec 28 14:19:42.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:19:42.773: INFO: namespace daemonsets-9774 deletion completed in 6.13828346s

• [SLOW TEST:45.034 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
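
[Editor's note] The repeated "Node iruya-node is running more than one daemon pod" lines above are a quirk of the e2e poll helper: the message appears whenever a node's daemon-pod count is not exactly 1, including 0 while the pod is still starting, so it is expected noise during the wait. The spec itself drives scheduling purely through labels: a DaemonSet with a nodeSelector runs nowhere until a node is labelled blue, is unscheduled when the label flips to green, and comes back once the selector is updated to match. A minimal sketch of such a DaemonSet; the image and label keys are illustrative.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative selector labels
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // Only nodes carrying this label run a daemon pod; relabelling
                    // a node schedules or evicts its pod, as the spec above does
                    // with blue/green.
                    NodeSelector: map[string]string{"color": "blue"},
                    Containers: []corev1.Container{{
                        Name:    "app",
                        Image:   "busybox", // illustrative image
                        Command: []string{"sleep", "3600"},
                    }},
                },
            },
        },
    }
    fmt.Printf("daemonset %q targets nodes with %v\n", ds.Name, ds.Spec.Template.Spec.NodeSelector)
}
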
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:19:42.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:19:42.909: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 28 14:19:47.922: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 14:19:51.944: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 28 14:19:53.954: INFO: Creating deployment "test-rollover-deployment"
Dec 28 14:19:53.968: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 28 14:19:55.979: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 28 14:19:55.987: INFO: Ensure that both replica sets have 1 created replica
Dec 28 14:19:55.995: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 28 14:19:56.004: INFO: Updating deployment test-rollover-deployment
Dec 28 14:19:56.004: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 28 14:19:58.039: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 28 14:19:58.052: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 28 14:19:58.060: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:19:58.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139596, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:00.108: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:00.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139596, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:02.078: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:02.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139596, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:04.071: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:04.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139596, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:06.073: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:06.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139604, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:08.073: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:08.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139604, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:10.069: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:10.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139604, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:12.100: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:12.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139604, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:14.072: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 14:20:14.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139604, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713139593, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:20:16.072: INFO: 
Dec 28 14:20:16.072: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 28 14:20:16.083: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6582,SelfLink:/apis/apps/v1/namespaces/deployment-6582/deployments/test-rollover-deployment,UID:56373567-89e6-4f0f-978e-63babe0c9b5d,ResourceVersion:18403365,Generation:2,CreationTimestamp:2019-12-28 14:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-28 14:19:54 +0000 UTC 2019-12-28 14:19:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-28 14:20:15 +0000 UTC 2019-12-28 14:19:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 14:20:16.087: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6582,SelfLink:/apis/apps/v1/namespaces/deployment-6582/replicasets/test-rollover-deployment-854595fc44,UID:985e8d6e-bcbc-48fc-97c1-e38080a296b5,ResourceVersion:18403354,Generation:2,CreationTimestamp:2019-12-28 14:19:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56373567-89e6-4f0f-978e-63babe0c9b5d 0xc003273157 0xc003273158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 28 14:20:16.087: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 28 14:20:16.088: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6582,SelfLink:/apis/apps/v1/namespaces/deployment-6582/replicasets/test-rollover-controller,UID:6ef89b2f-750d-4f31-bd7f-6678ac4c7abe,ResourceVersion:18403364,Generation:2,CreationTimestamp:2019-12-28 14:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56373567-89e6-4f0f-978e-63babe0c9b5d 0xc003273067 0xc003273068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 14:20:16.088: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6582,SelfLink:/apis/apps/v1/namespaces/deployment-6582/replicasets/test-rollover-deployment-9b8b997cf,UID:03a91c9e-6ea6-489b-bc0e-e2bb9b8a8ea3,ResourceVersion:18403317,Generation:2,CreationTimestamp:2019-12-28 14:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 56373567-89e6-4f0f-978e-63babe0c9b5d 0xc003273220 0xc003273221}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 14:20:16.093: INFO: Pod "test-rollover-deployment-854595fc44-fl4ss" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-fl4ss,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6582,SelfLink:/api/v1/namespaces/deployment-6582/pods/test-rollover-deployment-854595fc44-fl4ss,UID:5a290c58-629b-40d2-8c4b-dfb0a010fdc0,ResourceVersion:18403338,Generation:0,CreationTimestamp:2019-12-28 14:19:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 985e8d6e-bcbc-48fc-97c1-e38080a296b5 0xc003273e67 0xc003273e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rbvj5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rbvj5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rbvj5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003273ee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003273f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:19:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:20:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:20:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:19:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-28 14:19:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-28 14:20:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4c3fe9e01a45f22867cba95d513158cd0402708454178224b28001e35edd470d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:20:16.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6582" for this suite.
Dec 28 14:20:24.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:20:24.212: INFO: namespace deployment-6582 deletion completed in 8.115320501s

• [SLOW TEST:41.437 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
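
[Editor's note] "Rollover" means updating the Deployment's pod template while the previous rollout is still in flight, and the long status loop above is explained by the spec dumped below it: MinReadySeconds:10 with MaxUnavailable:0 and MaxSurge:1. The new pod becomes Ready quickly (ReadyReplicas:2) but only counts as Available after 10 seconds of readiness (AvailableReplicas:1), so the old replica set cannot be scaled down until then. A sketch of a Deployment with those settings; only the values visible in the log are carried over, the rest is illustrative.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    replicas := int32(1)
    maxUnavailable := intstr.FromInt(0) // never drop below the desired count
    maxSurge := intstr.FromInt(1)       // allow one extra pod during the rollover
    labels := map[string]string{"name": "rollover-pod"}
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas:        &replicas,
            MinReadySeconds: 10, // a pod counts as Available only after 10s of readiness
            Selector:        &metav1.LabelSelector{MatchLabels: labels},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxUnavailable: &maxUnavailable,
                    MaxSurge:       &maxSurge,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
    fmt.Printf("%s: maxUnavailable=%s maxSurge=%s minReadySeconds=%d\n",
        d.Name, maxUnavailable.String(), maxSurge.String(), d.Spec.MinReadySeconds)
}
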
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:20:24.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 28 14:20:24.451: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2927,SelfLink:/api/v1/namespaces/watch-2927/configmaps/e2e-watch-test-watch-closed,UID:4b0e106c-4b36-41eb-bdd0-6b232b649bdc,ResourceVersion:18403414,Generation:0,CreationTimestamp:2019-12-28 14:20:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 14:20:24.452: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2927,SelfLink:/api/v1/namespaces/watch-2927/configmaps/e2e-watch-test-watch-closed,UID:4b0e106c-4b36-41eb-bdd0-6b232b649bdc,ResourceVersion:18403415,Generation:0,CreationTimestamp:2019-12-28 14:20:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 28 14:20:24.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2927,SelfLink:/api/v1/namespaces/watch-2927/configmaps/e2e-watch-test-watch-closed,UID:4b0e106c-4b36-41eb-bdd0-6b232b649bdc,ResourceVersion:18403416,Generation:0,CreationTimestamp:2019-12-28 14:20:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 14:20:24.511: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2927,SelfLink:/api/v1/namespaces/watch-2927/configmaps/e2e-watch-test-watch-closed,UID:4b0e106c-4b36-41eb-bdd0-6b232b649bdc,ResourceVersion:18403417,Generation:0,CreationTimestamp:2019-12-28 14:20:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:20:24.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2927" for this suite.
Dec 28 14:20:30.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:20:30.758: INFO: namespace watch-2927 deletion completed in 6.231418064s

• [SLOW TEST:6.545 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
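
[Editor's note] The watch test relies on resourceVersion semantics: a client that remembers the last resourceVersion it observed can open a new watch from that point and the API server replays every intervening change, which is why the restarted watch above still sees the "mutation: 2" MODIFIED event and the DELETED event. A sketch of the pattern with client-go, assuming a recent release where Watch takes a context (older releases omit it); the namespace is illustrative, the kubeconfig path matches the log.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default" // illustrative namespace
    selector := "watch-this-configmap=watch-closed-and-restarted"
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        panic(err)
    }

    // Consume some events, remembering the last resourceVersion observed
    // (a real client should also handle non-ConfigMap error objects here).
    var lastRV string
    for i := 0; i < 2; i++ {
        ev := <-w.ResultChan()
        cm := ev.Object.(*corev1.ConfigMap)
        lastRV = cm.ResourceVersion
        fmt.Printf("Got : %s %s rv=%s\n", ev.Type, cm.Name, lastRV)
    }
    w.Stop()

    // Restart the watch from that version: the server replays every change
    // made after lastRV, so nothing is missed while the watch was closed.
    w2, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(),
        metav1.ListOptions{LabelSelector: selector, ResourceVersion: lastRV})
    if err != nil {
        panic(err)
    }
    defer w2.Stop()
    ev := <-w2.ResultChan()
    fmt.Printf("Got : %s after restart\n", ev.Type)
}
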
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:20:30.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-cef6a230-c805-4117-9f0f-307f1d94a155
STEP: Creating secret with name s-test-opt-upd-b716e366-96dc-4a96-9303-acfc447af39d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cef6a230-c805-4117-9f0f-307f1d94a155
STEP: Updating secret s-test-opt-upd-b716e366-96dc-4a96-9303-acfc447af39d
STEP: Creating secret with name s-test-opt-create-08a73437-ecb0-425c-a882-ec9fe209b270
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:20:47.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1267" for this suite.
Dec 28 14:21:09.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:21:09.395: INFO: namespace secrets-1267 deletion completed in 22.135131155s

• [SLOW TEST:38.636 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
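
[Editor's note] The secrets here are mounted with Optional set, which is what lets the spec delete one referenced secret and create another after the pod is already running: the pod starts even if a referenced secret is absent, and the kubelet's periodic sync adds, updates, or removes the mounted files as the secrets change. A minimal sketch of such a volume; the secret name echoes the log's generated one but is illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    vol := corev1.Volume{
        Name: "creates-volume", // illustrative volume name
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: "s-test-opt-create", // may not exist yet: the pod still starts
                Optional:   &optional,
            },
        },
    }
    fmt.Printf("volume %q tolerates a missing secret: %v\n", vol.Name, *vol.Secret.Optional)
    // The kubelet resyncs secret volumes periodically, so creating, updating,
    // or deleting the secret is eventually reflected in the mounted files,
    // which is what the "waiting to observe update in volume" step polls for.
}
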
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:21:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 14:21:18.784: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:21:18.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5132" for this suite.
Dec 28 14:21:24.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:21:24.994: INFO: namespace container-runtime-5132 deletion completed in 6.129790651s

• [SLOW TEST:15.599 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
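
[Editor's note] When a container exits, the kubelet reads the file at the container's terminationMessagePath and surfaces it in the container status, which is what the "Expected: &{DONE} to match Container's Termination Message: DONE" line above asserts. This variant sets a non-default path and runs as a non-root user. A sketch of such a container; the UID, image, and exact path are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(1000) // non-root UID, illustrative
    c := corev1.Container{
        Name:    "termination-message-container",
        Image:   "busybox",
        Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
        // Non-default path: the kubelet reads this file when the container
        // exits and records it as the termination message in the status.
        TerminationMessagePath: "/dev/termination-custom-log",
        SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
    }
    fmt.Printf("container %q writes its termination message to %s\n", c.Name, c.TerminationMessagePath)
}
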
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:21:24.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:21:25.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55" in namespace "projected-7055" to be "success or failure"
Dec 28 14:21:25.069: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.682204ms
Dec 28 14:21:27.076: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013119234s
Dec 28 14:21:29.083: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020586348s
Dec 28 14:21:31.091: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028644947s
Dec 28 14:21:33.104: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04108475s
STEP: Saw pod success
Dec 28 14:21:33.104: INFO: Pod "downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55" satisfied condition "success or failure"
Dec 28 14:21:33.106: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55 container client-container: 
STEP: delete the pod
Dec 28 14:21:33.196: INFO: Waiting for pod downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55 to disappear
Dec 28 14:21:33.204: INFO: Pod downwardapi-volume-78bc9965-238f-4e1d-9796-786fe9bb0d55 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:21:33.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7055" for this suite.
Dec 28 14:21:39.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:21:39.401: INFO: namespace projected-7055 deletion completed in 6.192133777s

• [SLOW TEST:14.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
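
[Editor's note] In a projected downwardAPI volume, each item can carry its own Mode, overriding the volume-wide DefaultMode; the spec checks that the per-item mode actually lands on the file. A minimal sketch of the volume source; the path "podname" and mode 0400 are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // per-item file mode; overrides the volume's DefaultMode
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.name",
                            },
                            Mode: &mode,
                        }},
                    },
                }},
            },
        },
    }
    item := vol.Projected.Sources[0].DownwardAPI.Items[0]
    fmt.Printf("item %q will be created with mode %o\n", item.Path, mode)
}
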
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:21:39.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 28 14:21:39.592: INFO: Waiting up to 5m0s for pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef" in namespace "downward-api-6168" to be "success or failure"
Dec 28 14:21:39.595: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.181725ms
Dec 28 14:21:41.616: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024481402s
Dec 28 14:21:43.628: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03653935s
Dec 28 14:21:45.676: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084294815s
Dec 28 14:21:47.685: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093208384s
Dec 28 14:21:49.693: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101297982s
STEP: Saw pod success
Dec 28 14:21:49.693: INFO: Pod "downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef" satisfied condition "success or failure"
Dec 28 14:21:49.698: INFO: Trying to get logs from node iruya-node pod downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef container dapi-container: 
STEP: delete the pod
Dec 28 14:21:49.792: INFO: Waiting for pod downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef to disappear
Dec 28 14:21:49.800: INFO: Pod downward-api-c1cf8ca8-86b6-44a0-87b3-9a72676045ef no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:21:49.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6168" for this suite.
Dec 28 14:21:55.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:21:55.972: INFO: namespace downward-api-6168 deletion completed in 6.167527091s

• [SLOW TEST:16.570 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
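
[Editor's note] A resourceFieldRef for limits.cpu or limits.memory on a container that declares no limits resolves to the node's allocatable capacity, which is the defaulting this spec verifies. A sketch of the env-var wiring; the variable names are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // No resources are set on the container, so limits.cpu / limits.memory
    // fall back to the node's allocatable capacity when resolved.
    env := []corev1.EnvVar{
        {
            Name: "CPU_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
            },
        },
        {
            Name: "MEMORY_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
            },
        },
    }
    fmt.Printf("%s and %s are injected via the downward API\n", env[0].Name, env[1].Name)
}
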
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:21:55.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2878f57e-3552-4084-97b7-e87fc38e7ec2
STEP: Creating a pod to test consume configMaps
Dec 28 14:21:56.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f" in namespace "configmap-8881" to be "success or failure"
Dec 28 14:21:56.146: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Pending", Reason="", readiness=false. Elapsed: 77.283469ms
Dec 28 14:21:58.154: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085024245s
Dec 28 14:22:00.161: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092180797s
Dec 28 14:22:02.176: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Running", Reason="", readiness=true. Elapsed: 6.107707107s
Dec 28 14:22:04.185: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Running", Reason="", readiness=true. Elapsed: 8.116580287s
Dec 28 14:22:06.196: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127290452s
STEP: Saw pod success
Dec 28 14:22:06.196: INFO: Pod "pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f" satisfied condition "success or failure"
Dec 28 14:22:06.200: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f container configmap-volume-test: 
STEP: delete the pod
Dec 28 14:22:06.246: INFO: Waiting for pod pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f to disappear
Dec 28 14:22:06.251: INFO: Pod pod-configmaps-8e9e9c8a-54ba-4e2d-be6d-9fb1d45f349f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:22:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8881" for this suite.
Dec 28 14:22:12.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:22:12.460: INFO: namespace configmap-8881 deletion completed in 6.20236386s

• [SLOW TEST:16.487 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
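
Here the interesting bit is the Items mapping, which renames a ConfigMap key to a different path inside the volume. A sketch of the objects involved, in Go against k8s.io/api (names, image, and the key/path pair are illustrative; the suite's own mounttest image is replaced by busybox):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// Key "data-1" appears in the volume as path/to/data-2.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
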
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:22:12.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 28 14:22:12.588: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-299" to be "success or failure"
Dec 28 14:22:12.616: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.84559ms
Dec 28 14:22:14.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058400557s
Dec 28 14:22:16.655: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067216966s
Dec 28 14:22:18.662: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074684927s
Dec 28 14:22:20.670: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082316886s
Dec 28 14:22:22.685: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097028206s
Dec 28 14:22:24.694: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.10645017s
STEP: Saw pod success
Dec 28 14:22:24.694: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 28 14:22:24.699: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 28 14:22:24.857: INFO: Waiting for pod pod-host-path-test to disappear
Dec 28 14:22:24.946: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:22:24.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-299" for this suite.
Dec 28 14:22:31.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:22:31.185: INFO: namespace hostpath-299 deletion completed in 6.228829955s

• [SLOW TEST:18.724 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
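
The pod built for this test mounts a hostPath volume and asserts the mode bits the kubelet gives the mount point. Roughly, in Go (image and command are illustrative; the real test uses the suite's mounttest binary):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the mode bits of the mount point; the test asserts
				// the expected mode for a hostPath-backed directory.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
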
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:22:31.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 28 14:22:39.887: INFO: Successfully updated pod "annotationupdate086975ea-691f-48e4-9051-d5b4f0031537"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:22:41.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6435" for this suite.
Dec 28 14:23:04.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:23:04.189: INFO: namespace projected-6435 deletion completed in 22.227929353s

• [SLOW TEST:33.004 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
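
This test works because a downward-API projection of metadata.annotations is live: when the pod's annotations are updated through the API, the kubelet rewrites the projected file and the watching container sees the new content. A sketch of such a pod (names, image, and the annotation value are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Keep re-reading the projected file; after the annotations
				// are updated via the API, the kubelet rewrites it in place.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
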
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:23:04.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-cb3632bb-0ed5-4825-ba25-d111bc43a9fa
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:23:16.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8072" for this suite.
Dec 28 14:23:38.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:23:38.668: INFO: namespace configmap-8072 deletion completed in 22.209150632s

• [SLOW TEST:34.478 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
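
The ConfigMap under test carries both a text key in data and raw bytes in binaryData; mounted as a volume, each key becomes its own file, which is why the test waits for the text file and the binary file separately. A minimal sketch of such an object (name, keys, and byte values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-demo"},
		Data:       map[string]string{"data": "value-1"},
		// BinaryData holds arbitrary bytes (base64-encoded on the wire);
		// mounted as a volume it yields a file alongside the text key.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfb, 0xad, 0x00, 0x01}},
	}
	b, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(b))
}
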
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:23:38.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 28 14:23:47.452: INFO: Successfully updated pod "pod-update-activedeadlineseconds-aa200453-a4ac-4d1f-80c4-7a2a5b1b6a1f"
Dec 28 14:23:47.452: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-aa200453-a4ac-4d1f-80c4-7a2a5b1b6a1f" in namespace "pods-5292" to be "terminated due to deadline exceeded"
Dec 28 14:23:47.472: INFO: Pod "pod-update-activedeadlineseconds-aa200453-a4ac-4d1f-80c4-7a2a5b1b6a1f": Phase="Running", Reason="", readiness=true. Elapsed: 19.943783ms
Dec 28 14:23:49.497: INFO: Pod "pod-update-activedeadlineseconds-aa200453-a4ac-4d1f-80c4-7a2a5b1b6a1f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.044198657s
Dec 28 14:23:49.497: INFO: Pod "pod-update-activedeadlineseconds-aa200453-a4ac-4d1f-80c4-7a2a5b1b6a1f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:23:49.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5292" for this suite.
Dec 28 14:23:55.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:23:55.749: INFO: namespace pods-5292 deletion completed in 6.246492721s

• [SLOW TEST:17.081 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
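
activeDeadlineSeconds is one of the few pod-spec fields that may be mutated after creation; shortening it on a running pod is exactly what this test does, and the kubelet then fails the pod with reason DeadlineExceeded, as the log above shows. A sketch of the mutation (pod name and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause"}},
		},
	}
	// Once the pod is running, give it a short deadline.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	// Persisting this would be a client-go update call, e.g.
	// clientset.CoreV1().Pods(ns).Update(...); the exact signature
	// depends on the client-go release.
	fmt.Printf("%s: activeDeadlineSeconds=%d\n", pod.Name, *pod.Spec.ActiveDeadlineSeconds)
}
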
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:23:55.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1228 14:24:07.949972       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 14:24:07.950: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:24:07.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3318" for this suite.
Dec 28 14:24:14.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:24:14.152: INFO: namespace gc-3318 deletion completed in 6.197920487s

• [SLOW TEST:18.403 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
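
"Not orphaning" corresponds to a delete with a Background (or Foreground) propagation policy, which lets the garbage collector remove the RC's pods; an Orphan policy would leave them behind. A sketch of the options object (the surrounding delete call is left as a comment because its signature varies by client-go release):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background: the RC is deleted immediately and the garbage collector
	// deletes its pods afterwards, which is what this test waits for.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	// Passed to a call such as
	//   client.CoreV1().ReplicationControllers(ns).Delete(name, &opts)
	// in client-go of this vintage.
	b, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(b))
}
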
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:24:14.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b1684aed-f918-483b-85a4-3da6dc0fb71d
STEP: Creating a pod to test consume secrets
Dec 28 14:24:14.253: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328" in namespace "projected-1759" to be "success or failure"
Dec 28 14:24:14.259: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328": Phase="Pending", Reason="", readiness=false. Elapsed: 5.424857ms
Dec 28 14:24:16.278: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024328099s
Dec 28 14:24:18.284: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031019759s
Dec 28 14:24:20.293: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039947514s
Dec 28 14:24:22.304: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051158392s
STEP: Saw pod success
Dec 28 14:24:22.304: INFO: Pod "pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328" satisfied condition "success or failure"
Dec 28 14:24:22.309: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 14:24:22.357: INFO: Waiting for pod pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328 to disappear
Dec 28 14:24:22.363: INFO: Pod pod-projected-secrets-126c321e-c208-4685-983f-a21680c3c328 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:24:22.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1759" for this suite.
Dec 28 14:24:28.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:24:28.528: INFO: namespace projected-1759 deletion completed in 6.158722635s

• [SLOW TEST:14.376 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
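
A projected volume can serve a secret the same way a plain secret volume does; this test consumes one through the projected source. A sketch in Go (names, image, and the key are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
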
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:24:28.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ldkm2 in namespace proxy-6130
I1228 14:24:28.759276       8 runners.go:180] Created replication controller with name: proxy-service-ldkm2, namespace: proxy-6130, replica count: 1
I1228 14:24:29.810305       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:30.810794       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:31.811389       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:32.812074       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:33.812533       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:34.812990       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 14:24:35.813531       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:36.814023       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:37.814633       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:38.815077       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:39.815408       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:40.815791       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 14:24:41.816138       8 runners.go:180] proxy-service-ldkm2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 14:24:41.826: INFO: setup took 13.200807434s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 28 14:24:41.891: INFO: (0) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 64.372457ms)
Dec 28 14:24:41.891: INFO: (0) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 64.618671ms)
Dec 28 14:24:41.891: INFO: (0) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 64.520997ms)
Dec 28 14:24:41.892: INFO: (0) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 65.381951ms)
Dec 28 14:24:41.892: INFO: (0) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 65.273934ms)
Dec 28 14:24:41.892: INFO: (0) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 65.881072ms)
Dec 28 14:24:41.892: INFO: (0) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 65.691483ms)
Dec 28 14:24:41.901: INFO: (0) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 75.037477ms)
Dec 28 14:24:41.903: INFO: (0) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 75.986339ms)
Dec 28 14:24:41.903: INFO: (0) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 76.062256ms)
Dec 28 14:24:41.904: INFO: (0) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 77.181918ms)
Dec 28 14:24:41.922: INFO: (0) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 95.476723ms)
Dec 28 14:24:41.922: INFO: (0) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 95.558412ms)
Dec 28 14:24:41.922: INFO: (0) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test (200; 14.403917ms)
Dec 28 14:24:41.938: INFO: (1) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 14.705995ms)
Dec 28 14:24:41.938: INFO: (1) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 14.352379ms)
Dec 28 14:24:41.938: INFO: (1) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 14.847188ms)
Dec 28 14:24:41.939: INFO: (1) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 15.953037ms)
Dec 28 14:24:41.939: INFO: (1) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 15.717792ms)
Dec 28 14:24:41.939: INFO: (1) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 15.843746ms)
Dec 28 14:24:41.941: INFO: (1) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 9.505462ms)
Dec 28 14:24:41.957: INFO: (2) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 9.470808ms)
Dec 28 14:24:41.959: INFO: (2) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test (200; 11.781932ms)
Dec 28 14:24:41.960: INFO: (2) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 12.290495ms)
Dec 28 14:24:41.961: INFO: (2) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 13.289978ms)
Dec 28 14:24:41.963: INFO: (2) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 15.380004ms)
Dec 28 14:24:41.964: INFO: (2) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 15.94917ms)
Dec 28 14:24:41.967: INFO: (2) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 19.817751ms)
Dec 28 14:24:41.969: INFO: (2) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 20.846717ms)
Dec 28 14:24:41.969: INFO: (2) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 21.590987ms)
Dec 28 14:24:41.969: INFO: (2) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 21.698451ms)
Dec 28 14:24:41.973: INFO: (2) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 25.201957ms)
Dec 28 14:24:41.975: INFO: (2) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 27.199601ms)
Dec 28 14:24:41.989: INFO: (3) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 14.047036ms)
Dec 28 14:24:41.989: INFO: (3) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 14.353935ms)
Dec 28 14:24:41.991: INFO: (3) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 15.680989ms)
Dec 28 14:24:41.991: INFO: (3) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 16.448155ms)
Dec 28 14:24:41.991: INFO: (3) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 16.502115ms)
Dec 28 14:24:41.991: INFO: (3) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 16.474074ms)
Dec 28 14:24:41.991: INFO: (3) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 16.553133ms)
Dec 28 14:24:41.992: INFO: (3) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 16.603853ms)
Dec 28 14:24:41.992: INFO: (3) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 16.733112ms)
Dec 28 14:24:41.994: INFO: (3) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 18.945937ms)
Dec 28 14:24:41.994: INFO: (3) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 18.865694ms)
Dec 28 14:24:41.994: INFO: (3) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 19.029975ms)
Dec 28 14:24:41.994: INFO: (3) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 14.511543ms)
Dec 28 14:24:42.010: INFO: (4) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 14.905638ms)
Dec 28 14:24:42.011: INFO: (4) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 15.341253ms)
Dec 28 14:24:42.014: INFO: (4) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 18.341946ms)
Dec 28 14:24:42.014: INFO: (4) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 18.402522ms)
Dec 28 14:24:42.014: INFO: (4) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 18.439678ms)
Dec 28 14:24:42.016: INFO: (4) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 20.879112ms)
Dec 28 14:24:42.019: INFO: (4) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 24.180746ms)
Dec 28 14:24:42.030: INFO: (5) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 10.029955ms)
Dec 28 14:24:42.030: INFO: (5) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 10.812309ms)
Dec 28 14:24:42.040: INFO: (5) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 20.22656ms)
Dec 28 14:24:42.040: INFO: (5) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 20.295682ms)
Dec 28 14:24:42.042: INFO: (5) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 21.92921ms)
Dec 28 14:24:42.042: INFO: (5) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 21.774271ms)
Dec 28 14:24:42.042: INFO: (5) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 22.282365ms)
Dec 28 14:24:42.042: INFO: (5) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 22.709294ms)
Dec 28 14:24:42.054: INFO: (5) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 34.757388ms)
Dec 28 14:24:42.055: INFO: (5) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 16.433922ms)
Dec 28 14:24:42.073: INFO: (6) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 16.324027ms)
Dec 28 14:24:42.073: INFO: (6) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 16.407856ms)
Dec 28 14:24:42.073: INFO: (6) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 17.016185ms)
Dec 28 14:24:42.073: INFO: (6) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 16.630158ms)
Dec 28 14:24:42.073: INFO: (6) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 16.546683ms)
Dec 28 14:24:42.076: INFO: (6) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 18.9275ms)
Dec 28 14:24:42.076: INFO: (6) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 19.354313ms)
Dec 28 14:24:42.076: INFO: (6) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 19.687241ms)
Dec 28 14:24:42.076: INFO: (6) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 19.815497ms)
Dec 28 14:24:42.077: INFO: (6) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 20.009712ms)
Dec 28 14:24:42.077: INFO: (6) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 20.564208ms)
Dec 28 14:24:42.077: INFO: (6) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 20.702126ms)
Dec 28 14:24:42.077: INFO: (6) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 20.711276ms)
Dec 28 14:24:42.078: INFO: (6) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 10.463212ms)
Dec 28 14:24:42.089: INFO: (7) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.150731ms)
Dec 28 14:24:42.090: INFO: (7) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 11.360041ms)
Dec 28 14:24:42.090: INFO: (7) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test (200; 14.589698ms)
Dec 28 14:24:42.093: INFO: (7) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 15.03072ms)
Dec 28 14:24:42.093: INFO: (7) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 15.018531ms)
Dec 28 14:24:42.094: INFO: (7) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 15.816081ms)
Dec 28 14:24:42.094: INFO: (7) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 16.140559ms)
Dec 28 14:24:42.095: INFO: (7) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 17.042797ms)
Dec 28 14:24:42.096: INFO: (7) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 17.738936ms)
Dec 28 14:24:42.096: INFO: (7) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 17.555761ms)
Dec 28 14:24:42.102: INFO: (8) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 6.656893ms)
Dec 28 14:24:42.105: INFO: (8) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 9.081163ms)
Dec 28 14:24:42.105: INFO: (8) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 9.118459ms)
Dec 28 14:24:42.105: INFO: (8) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 9.387876ms)
Dec 28 14:24:42.106: INFO: (8) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 9.662114ms)
Dec 28 14:24:42.110: INFO: (8) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 14.144201ms)
Dec 28 14:24:42.110: INFO: (8) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 14.273098ms)
Dec 28 14:24:42.110: INFO: (8) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 14.421422ms)
Dec 28 14:24:42.110: INFO: (8) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 14.429009ms)
Dec 28 14:24:42.111: INFO: (8) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 14.763954ms)
Dec 28 14:24:42.111: INFO: (8) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 14.80101ms)
Dec 28 14:24:42.112: INFO: (8) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 15.621213ms)
Dec 28 14:24:42.112: INFO: (8) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 16.024695ms)
Dec 28 14:24:42.112: INFO: (8) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 9.044382ms)
Dec 28 14:24:42.122: INFO: (9) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 9.257997ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 9.4595ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 9.692828ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 10.101927ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 10.052922ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 10.083397ms)
Dec 28 14:24:42.123: INFO: (9) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 10.409864ms)
Dec 28 14:24:42.124: INFO: (9) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 10.369842ms)
Dec 28 14:24:42.124: INFO: (9) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 11.145192ms)
Dec 28 14:24:42.127: INFO: (9) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 13.691408ms)
Dec 28 14:24:42.127: INFO: (9) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 13.984975ms)
Dec 28 14:24:42.134: INFO: (10) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 6.344565ms)
Dec 28 14:24:42.134: INFO: (10) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 6.342703ms)
Dec 28 14:24:42.134: INFO: (10) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 6.493393ms)
Dec 28 14:24:42.134: INFO: (10) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 6.73636ms)
Dec 28 14:24:42.134: INFO: (10) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test<... (200; 8.882869ms)
Dec 28 14:24:42.139: INFO: (10) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 11.866372ms)
Dec 28 14:24:42.139: INFO: (10) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.861708ms)
Dec 28 14:24:42.139: INFO: (10) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 11.948106ms)
Dec 28 14:24:42.139: INFO: (10) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 12.122702ms)
Dec 28 14:24:42.140: INFO: (10) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 12.262281ms)
Dec 28 14:24:42.141: INFO: (10) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 13.403359ms)
Dec 28 14:24:42.142: INFO: (10) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 14.352271ms)
Dec 28 14:24:42.142: INFO: (10) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 14.742926ms)
Dec 28 14:24:42.142: INFO: (10) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 15.020212ms)
Dec 28 14:24:42.144: INFO: (10) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 17.037657ms)
Dec 28 14:24:42.156: INFO: (11) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test<... (200; 11.422299ms)
Dec 28 14:24:42.159: INFO: (11) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 14.884701ms)
Dec 28 14:24:42.161: INFO: (11) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 16.559208ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 17.40504ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 17.484785ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 17.586145ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 17.602267ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 17.605982ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 17.640808ms)
Dec 28 14:24:42.162: INFO: (11) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 17.513823ms)
Dec 28 14:24:42.163: INFO: (11) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 18.161652ms)
Dec 28 14:24:42.165: INFO: (11) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 20.648479ms)
Dec 28 14:24:42.166: INFO: (11) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 21.476831ms)
Dec 28 14:24:42.166: INFO: (11) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 21.629196ms)
Dec 28 14:24:42.166: INFO: (11) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 21.647958ms)
Dec 28 14:24:42.176: INFO: (12) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 9.753692ms)
Dec 28 14:24:42.176: INFO: (12) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 9.939252ms)
Dec 28 14:24:42.176: INFO: (12) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 10.169032ms)
Dec 28 14:24:42.176: INFO: (12) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 10.091822ms)
Dec 28 14:24:42.176: INFO: (12) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 12.602729ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 12.608367ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 13.10256ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 13.167806ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 13.092677ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 13.199539ms)
Dec 28 14:24:42.179: INFO: (12) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 13.058847ms)
Dec 28 14:24:42.186: INFO: (13) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 6.286569ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 7.160588ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 7.224932ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 7.278952ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 7.286424ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 7.484766ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 7.554405ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 7.592759ms)
Dec 28 14:24:42.187: INFO: (13) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 7.707851ms)
Dec 28 14:24:42.191: INFO: (13) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 9.982505ms)
Dec 28 14:24:42.203: INFO: (14) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 9.970867ms)
Dec 28 14:24:42.203: INFO: (14) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 10.217777ms)
Dec 28 14:24:42.204: INFO: (14) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 10.201886ms)
Dec 28 14:24:42.204: INFO: (14) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 10.501756ms)
Dec 28 14:24:42.205: INFO: (14) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 11.424905ms)
Dec 28 14:24:42.205: INFO: (14) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.422111ms)
Dec 28 14:24:42.205: INFO: (14) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 11.500405ms)
Dec 28 14:24:42.205: INFO: (14) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 11.6935ms)
Dec 28 14:24:42.205: INFO: (14) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 12.251682ms)
Dec 28 14:24:42.211: INFO: (14) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 18.134787ms)
Dec 28 14:24:42.212: INFO: (14) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 18.790796ms)
Dec 28 14:24:42.223: INFO: (15) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 10.334369ms)
Dec 28 14:24:42.223: INFO: (15) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 10.301139ms)
Dec 28 14:24:42.223: INFO: (15) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 10.554032ms)
Dec 28 14:24:42.223: INFO: (15) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 10.750347ms)
Dec 28 14:24:42.223: INFO: (15) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.105881ms)
Dec 28 14:24:42.224: INFO: (15) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.709549ms)
Dec 28 14:24:42.224: INFO: (15) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 11.83762ms)
Dec 28 14:24:42.225: INFO: (15) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 12.326358ms)
Dec 28 14:24:42.231: INFO: (15) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 25.875718ms)
Dec 28 14:24:42.254: INFO: (15) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 41.409151ms)
Dec 28 14:24:42.254: INFO: (15) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 41.517824ms)
Dec 28 14:24:42.254: INFO: (15) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 41.834699ms)
Dec 28 14:24:42.254: INFO: (15) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 41.905222ms)
Dec 28 14:24:42.255: INFO: (15) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 42.689835ms)
Dec 28 14:24:42.256: INFO: (15) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 44.325205ms)
Dec 28 14:24:42.272: INFO: (16) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 15.127147ms)
Dec 28 14:24:42.272: INFO: (16) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 15.110467ms)
Dec 28 14:24:42.273: INFO: (16) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 16.141048ms)
Dec 28 14:24:42.274: INFO: (16) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 17.176212ms)
Dec 28 14:24:42.274: INFO: (16) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 17.180042ms)
Dec 28 14:24:42.274: INFO: (16) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 17.700191ms)
Dec 28 14:24:42.274: INFO: (16) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 17.9162ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 18.106282ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 18.082892ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 18.324352ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 18.711123ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 18.823834ms)
Dec 28 14:24:42.275: INFO: (16) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 19.200162ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 10.748352ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 10.830952ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 11.054268ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 10.974011ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 11.100066ms)
Dec 28 14:24:42.287: INFO: (17) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.428266ms)
Dec 28 14:24:42.288: INFO: (17) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 11.840292ms)
Dec 28 14:24:42.288: INFO: (17) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 12.088807ms)
Dec 28 14:24:42.288: INFO: (17) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: ... (200; 17.85114ms)
Dec 28 14:24:42.314: INFO: (18) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 17.838765ms)
Dec 28 14:24:42.317: INFO: (18) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 20.534657ms)
Dec 28 14:24:42.317: INFO: (18) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 20.635469ms)
Dec 28 14:24:42.320: INFO: (18) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 23.875779ms)
Dec 28 14:24:42.321: INFO: (18) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:462/proxy/: tls qux (200; 24.109142ms)
Dec 28 14:24:42.321: INFO: (18) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 24.299403ms)
Dec 28 14:24:42.321: INFO: (18) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 24.314667ms)
Dec 28 14:24:42.321: INFO: (18) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:1080/proxy/: test<... (200; 24.380514ms)
Dec 28 14:24:42.321: INFO: (18) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 24.506026ms)
Dec 28 14:24:42.322: INFO: (18) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 25.056256ms)
Dec 28 14:24:42.322: INFO: (18) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 25.281529ms)
Dec 28 14:24:42.322: INFO: (18) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 25.44474ms)
Dec 28 14:24:42.322: INFO: (18) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 25.603857ms)
Dec 28 14:24:42.334: INFO: (19) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 11.663041ms)
Dec 28 14:24:42.336: INFO: (19) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname2/proxy/: bar (200; 13.542765ms)
Dec 28 14:24:42.337: INFO: (19) /api/v1/namespaces/proxy-6130/services/proxy-service-ldkm2:portname1/proxy/: foo (200; 14.754763ms)
Dec 28 14:24:42.337: INFO: (19) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname1/proxy/: tls baz (200; 14.822374ms)
Dec 28 14:24:42.337: INFO: (19) /api/v1/namespaces/proxy-6130/services/https:proxy-service-ldkm2:tlsportname2/proxy/: tls qux (200; 14.874209ms)
Dec 28 14:24:42.337: INFO: (19) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname2/proxy/: bar (200; 14.978418ms)
Dec 28 14:24:42.341: INFO: (19) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 18.6886ms)
Dec 28 14:24:42.341: INFO: (19) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9/proxy/: test (200; 18.788465ms)
Dec 28 14:24:42.341: INFO: (19) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:160/proxy/: foo (200; 18.803915ms)
Dec 28 14:24:42.341: INFO: (19) /api/v1/namespaces/proxy-6130/pods/proxy-service-ldkm2-g47t9:162/proxy/: bar (200; 18.931754ms)
Dec 28 14:24:42.341: INFO: (19) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:443/proxy/: test<... (200; 19.268426ms)
Dec 28 14:24:42.342: INFO: (19) /api/v1/namespaces/proxy-6130/pods/http:proxy-service-ldkm2-g47t9:1080/proxy/: ... (200; 19.362265ms)
Dec 28 14:24:42.342: INFO: (19) /api/v1/namespaces/proxy-6130/pods/https:proxy-service-ldkm2-g47t9:460/proxy/: tls baz (200; 19.568402ms)
Dec 28 14:24:42.344: INFO: (19) /api/v1/namespaces/proxy-6130/services/http:proxy-service-ldkm2:portname1/proxy/: foo (200; 21.793972ms)
STEP: deleting ReplicationController proxy-service-ldkm2 in namespace proxy-6130, will wait for the garbage collector to delete the pods
Dec 28 14:24:42.417: INFO: Deleting ReplicationController proxy-service-ldkm2 took: 17.330535ms
Dec 28 14:24:42.718: INFO: Terminating ReplicationController proxy-service-ldkm2 pods took: 301.260688ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:24:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6130" for this suite.
Dec 28 14:25:02.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:25:02.781: INFO: namespace proxy-6130 deletion completed in 6.148567579s

• [SLOW TEST:34.250 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
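
Every attempt above is a GET against an apiserver proxy path of the form /api/v1/namespaces/NS/{pods|services}/[scheme:]name[:port]/proxy/, which is why the same pod appears with http:, https:, and bare prefixes. A small sketch that rebuilds a few of the URLs seen in this run:

package main

import "fmt"

func main() {
	const ns = "proxy-6130"
	// Pod targets: [scheme:]podName[:port]; the scheme defaults to http
	// and the port to the pod's first declared port.
	for _, target := range []string{
		"proxy-service-ldkm2-g47t9:160",       // plain HTTP port
		"http:proxy-service-ldkm2-g47t9:1080", // explicit http scheme
		"https:proxy-service-ldkm2-g47t9:443", // TLS port
	} {
		fmt.Printf("/api/v1/namespaces/%s/pods/%s/proxy/\n", ns, target)
	}
	// Service targets use a port *name* instead of a number:
	// [scheme:]svcName[:portName].
	for _, target := range []string{
		"proxy-service-ldkm2:portname1",
		"https:proxy-service-ldkm2:tlsportname1",
	} {
		fmt.Printf("/api/v1/namespaces/%s/services/%s/proxy/\n", ns, target)
	}
}
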
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:25:02.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f2e77126-d84d-4265-8aa5-a438f1aae76c
STEP: Creating a pod to test consume secrets
Dec 28 14:25:02.908: INFO: Waiting up to 5m0s for pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634" in namespace "secrets-9278" to be "success or failure"
Dec 28 14:25:02.912: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844556ms
Dec 28 14:25:04.920: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012273869s
Dec 28 14:25:06.939: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031260402s
Dec 28 14:25:08.949: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04129549s
Dec 28 14:25:10.956: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048820597s
STEP: Saw pod success
Dec 28 14:25:10.956: INFO: Pod "pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634" satisfied condition "success or failure"
Dec 28 14:25:10.963: INFO: Trying to get logs from node iruya-node pod pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634 container secret-volume-test: 
STEP: delete the pod
Dec 28 14:25:11.109: INFO: Waiting for pod pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634 to disappear
Dec 28 14:25:11.130: INFO: Pod pod-secrets-96fb4be0-9d48-4838-b3c3-0def38137634 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:25:11.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9278" for this suite.
Dec 28 14:25:17.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:25:17.266: INFO: namespace secrets-9278 deletion completed in 6.1297935s

• [SLOW TEST:14.484 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
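
A note on what this spec exercises: with securityContext.runAsUser, securityContext.fsGroup, and secret.defaultMode all set, every file projected from the secret gets the requested mode and is group-owned by the fsGroup GID, so the non-root user can read it. A minimal sketch of such a pod follows; the object names, image, key, and mode are illustrative assumptions, not the exact spec the test generates:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
    fsGroup: 2000            # projected files are group-owned by GID 2000
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440      # applied to every projected file
EOF

The pod runs to Succeeded and its log shows the mode and ownership, which is the "success or failure" condition the framework polls for above.
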
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:25:17.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 28 14:25:17.434: INFO: Waiting up to 5m0s for pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800" in namespace "emptydir-5340" to be "success or failure"
Dec 28 14:25:17.510: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 75.795316ms
Dec 28 14:25:19.532: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097692078s
Dec 28 14:25:21.544: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109123224s
Dec 28 14:25:23.550: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115661307s
Dec 28 14:25:25.581: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145872582s
Dec 28 14:25:27.601: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166127314s
Dec 28 14:25:29.608: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.173839469s
STEP: Saw pod success
Dec 28 14:25:29.609: INFO: Pod "pod-7a0b8305-0085-472a-b51a-bcad948e7800" satisfied condition "success or failure"
Dec 28 14:25:29.612: INFO: Trying to get logs from node iruya-node pod pod-7a0b8305-0085-472a-b51a-bcad948e7800 container test-container: 
STEP: delete the pod
Dec 28 14:25:29.769: INFO: Waiting for pod pod-7a0b8305-0085-472a-b51a-bcad948e7800 to disappear
Dec 28 14:25:29.791: INFO: Pod pod-7a0b8305-0085-472a-b51a-bcad948e7800 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:25:29.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5340" for this suite.
Dec 28 14:25:35.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:25:35.968: INFO: namespace emptydir-5340 deletion completed in 6.168210572s

• [SLOW TEST:18.702 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
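
The triple (non-root,0666,tmpfs) in the test name encodes the three knobs checked here: the pod runs as a non-root UID, files on the mount are expected to carry mode 0666, and the emptyDir is memory-backed. A rough equivalent, with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test && mount | grep '/mnt/test'"]
    volumeMounts:
    - name: tmp
      mountPath: /mnt/test
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
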
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:25:35.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:25:36.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396" in namespace "projected-5449" to be "success or failure"
Dec 28 14:25:36.064: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668325ms
Dec 28 14:25:38.074: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013402694s
Dec 28 14:25:40.091: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030097809s
Dec 28 14:25:42.129: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068586637s
Dec 28 14:25:44.137: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076338397s
Dec 28 14:25:46.148: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087784426s
STEP: Saw pod success
Dec 28 14:25:46.148: INFO: Pod "downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396" satisfied condition "success or failure"
Dec 28 14:25:46.154: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396 container client-container: 
STEP: delete the pod
Dec 28 14:25:46.272: INFO: Waiting for pod downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396 to disappear
Dec 28 14:25:46.285: INFO: Pod downwardapi-volume-240ac515-cc7c-42a6-b65b-c5a9a2dfb396 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:25:46.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5449" for this suite.
Dec 28 14:25:52.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:25:52.533: INFO: namespace projected-5449 deletion completed in 6.230500117s

• [SLOW TEST:16.563 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
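
What this test reads back is the container's own memory limit, exposed as a file through a projected downwardAPI volume. A sketch of the relevant shape (names, image, and the 64Mi limit are assumptions; divisor controls the unit the value is rendered in):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi       # render the limit in MiB, i.e. "64"
EOF
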
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:25:52.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 28 14:25:52.661: INFO: Waiting up to 5m0s for pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851" in namespace "var-expansion-2566" to be "success or failure"
Dec 28 14:25:52.680: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851": Phase="Pending", Reason="", readiness=false. Elapsed: 19.397472ms
Dec 28 14:25:54.686: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025887766s
Dec 28 14:25:56.695: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034273849s
Dec 28 14:25:58.710: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049267945s
Dec 28 14:26:00.716: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055600641s
STEP: Saw pod success
Dec 28 14:26:00.716: INFO: Pod "var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851" satisfied condition "success or failure"
Dec 28 14:26:00.720: INFO: Trying to get logs from node iruya-node pod var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851 container dapi-container: 
STEP: delete the pod
Dec 28 14:26:00.794: INFO: Waiting for pod var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851 to disappear
Dec 28 14:26:00.813: INFO: Pod var-expansion-04a23625-761c-43f1-8f5e-1b2710b5d851 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:26:00.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2566" for this suite.
Dec 28 14:26:06.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:26:06.925: INFO: namespace var-expansion-2566 deletion completed in 6.1014143s

• [SLOW TEST:14.392 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
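
"Substituting values" here means the kubelet expanding $(VAR) references in command and args against the container's env; no shell is involved. A minimal sketch (variable name and value are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]         # expanded by the kubelet before the container starts
EOF

kubectl logs var-expansion-demo then prints "test-value".
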
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:26:06.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 28 14:26:29.290: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:29.290: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:29.810: INFO: Exec stderr: ""
Dec 28 14:26:29.810: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:29.810: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:30.113: INFO: Exec stderr: ""
Dec 28 14:26:30.113: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:30.113: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:30.422: INFO: Exec stderr: ""
Dec 28 14:26:30.422: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:30.422: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:30.983: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 28 14:26:30.983: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:30.983: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:31.233: INFO: Exec stderr: ""
Dec 28 14:26:31.233: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:31.233: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:31.510: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 28 14:26:31.510: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:31.510: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:31.803: INFO: Exec stderr: ""
Dec 28 14:26:31.803: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:31.803: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:32.044: INFO: Exec stderr: ""
Dec 28 14:26:32.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:32.045: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:32.310: INFO: Exec stderr: ""
Dec 28 14:26:32.310: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7969 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:26:32.310: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:26:32.747: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:26:32.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7969" for this suite.
Dec 28 14:27:34.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:27:34.872: INFO: namespace e2e-kubelet-etc-hosts-7969 deletion completed in 1m2.11243762s

• [SLOW TEST:87.947 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
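
All three verifications above follow from one kubelet rule: it writes a managed /etc/hosts into a pod unless the pod runs with hostNetwork: true or a container mounts its own file over /etc/hosts (the busybox-3 case). The hostNetwork side of the check can be sketched as follows; names and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnetwork-demo
spec:
  hostNetwork: true              # kubelet leaves /etc/hosts untouched
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF

A kubelet-managed file announces itself with a "# Kubernetes-managed hosts file" header line, which is the marker the comparison keys on.
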
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:27:34.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:28:34.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5545" for this suite.
Dec 28 14:28:57.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:28:57.159: INFO: namespace container-probe-5545 deletion completed in 22.200761744s

• [SLOW TEST:82.286 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
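
The asymmetry with liveness probes is the point of this test: a failing readiness probe only keeps the pod out of Ready (and out of Service endpoints); it never restarts the container, so restartCount stays 0 for the whole observation window. A sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: test-webserver
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always fails: the pod runs but never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

kubectl get pod never-ready-demo keeps reporting READY 0/1 with RESTARTS 0.
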
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:28:57.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2ceeca97-aaf4-44a5-a311-4e6dc0545fcb
STEP: Creating a pod to test consume secrets
Dec 28 14:28:57.312: INFO: Waiting up to 5m0s for pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2" in namespace "secrets-8670" to be "success or failure"
Dec 28 14:28:57.373: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 60.67192ms
Dec 28 14:28:59.386: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073501746s
Dec 28 14:29:01.401: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089057611s
Dec 28 14:29:03.409: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097002464s
Dec 28 14:29:05.418: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Running", Reason="", readiness=true. Elapsed: 8.105127098s
Dec 28 14:29:07.428: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115847583s
STEP: Saw pod success
Dec 28 14:29:07.428: INFO: Pod "pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2" satisfied condition "success or failure"
Dec 28 14:29:07.435: INFO: Trying to get logs from node iruya-node pod pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2 container secret-volume-test: 
STEP: delete the pod
Dec 28 14:29:07.486: INFO: Waiting for pod pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2 to disappear
Dec 28 14:29:07.543: INFO: Pod pod-secrets-d8b603fe-d6b8-4d62-800b-6e22c9e98ed2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:29:07.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8670" for this suite.
Dec 28 14:29:13.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:29:13.826: INFO: namespace secrets-8670 deletion completed in 6.276044636s

• [SLOW TEST:16.667 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
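
"Mappings" refers to the items list on the secret volume source, which projects a chosen key under a chosen relative path instead of under the key's own name. Reusing the hypothetical demo-secret from the earlier sketch (paths and names remain illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1    # the key appears only under this path
EOF
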
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:29:13.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-83a078b1-61b1-4048-97c5-a1bce41128fe in namespace container-probe-4484
Dec 28 14:29:22.002: INFO: Started pod busybox-83a078b1-61b1-4048-97c5-a1bce41128fe in namespace container-probe-4484
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 14:29:22.007: INFO: Initial restart count of pod busybox-83a078b1-61b1-4048-97c5-a1bce41128fe is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:33:22.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4484" for this suite.
Dec 28 14:33:28.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:33:28.487: INFO: namespace container-probe-4484 deletion completed in 6.231113175s

• [SLOW TEST:254.661 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
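
This is the mirror image of the restart case: the container creates /tmp/health and never deletes it, so the exec probe keeps passing and the suite asserts restartCount is still 0 after four minutes. Roughly (names and timings are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-no-restart-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds for as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
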
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:33:28.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5804
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 14:33:28.582: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 14:34:02.898: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5804 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:34:02.898: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:34:03.289: INFO: Found all expected endpoints: [netserver-0]
Dec 28 14:34:03.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5804 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:34:03.305: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:34:03.668: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:34:03.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5804" for this suite.
Dec 28 14:34:27.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:34:27.888: INFO: namespace pod-network-test-5804 deletion completed in 24.207482114s

• [SLOW TEST:59.401 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
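
The check is exactly the curl shown in the log, issued from a hostNetwork helper pod against each netserver pod IP on port 8080. It can be replayed by hand while the namespace still exists; the jsonpath query is an assumption about how one would fetch the pod IP, not something the suite runs:

POD_IP=$(kubectl -n pod-network-test-5804 get pod netserver-0 -o jsonpath='{.status.podIP}')
kubectl -n pod-network-test-5804 exec host-test-container-pod -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://${POD_IP}:8080/hostName"
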
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:34:27.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f8b7982e-d063-432c-967f-e2cc185c189e
STEP: Creating a pod to test consume secrets
Dec 28 14:34:28.042: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb" in namespace "projected-2682" to be "success or failure"
Dec 28 14:34:28.050: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541305ms
Dec 28 14:34:30.058: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016193842s
Dec 28 14:34:32.068: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026160555s
Dec 28 14:34:34.075: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033575636s
Dec 28 14:34:36.085: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043660723s
Dec 28 14:34:38.092: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049815002s
STEP: Saw pod success
Dec 28 14:34:38.092: INFO: Pod "pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb" satisfied condition "success or failure"
Dec 28 14:34:38.094: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 14:34:38.253: INFO: Waiting for pod pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb to disappear
Dec 28 14:34:38.302: INFO: Pod pod-projected-secrets-ab632827-fa66-4308-aa7d-9480fc217beb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:34:38.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2682" for this suite.
Dec 28 14:34:44.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:34:44.467: INFO: namespace projected-2682 deletion completed in 6.158295415s

• [SLOW TEST:16.579 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
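
Same mode semantics as the plain secret volume earlier, but set on the projected volume type, where defaultMode lives on the projection rather than on the secret source. A sketch, again reusing the hypothetical demo-secret:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400          # applies to every file from every source below
      sources:
      - secret:
          name: demo-secret
EOF
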
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:34:44.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 28 14:34:44.589: INFO: Waiting up to 5m0s for pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d" in namespace "containers-9751" to be "success or failure"
Dec 28 14:34:44.608: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.910531ms
Dec 28 14:34:46.622: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032451635s
Dec 28 14:34:48.633: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043913803s
Dec 28 14:34:50.643: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05393534s
Dec 28 14:34:52.651: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061794971s
Dec 28 14:34:54.657: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067837492s
STEP: Saw pod success
Dec 28 14:34:54.657: INFO: Pod "client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d" satisfied condition "success or failure"
Dec 28 14:34:54.660: INFO: Trying to get logs from node iruya-node pod client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d container test-container: 
STEP: delete the pod
Dec 28 14:34:54.721: INFO: Waiting for pod client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d to disappear
Dec 28 14:34:54.744: INFO: Pod client-containers-1b555d3e-b8aa-4609-ab58-968e3661944d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:34:54.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9751" for this suite.
Dec 28 14:35:00.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:35:00.877: INFO: namespace containers-9751 deletion completed in 6.12636345s

• [SLOW TEST:16.407 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
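
With both command and args omitted, the runtime falls back to the image's ENTRYPOINT and CMD, which is all this spec asserts. A sketch (busybox's default CMD is "sh", which exits immediately without a TTY, so the pod goes straight to Succeeded):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # no command or args: the image's ENTRYPOINT/CMD are used as-is
EOF
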
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:35:00.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 14:35:00.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6309'
Dec 28 14:35:03.221: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 14:35:03.221: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 28 14:35:03.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6309'
Dec 28 14:35:03.474: INFO: stderr: ""
Dec 28 14:35:03.474: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:35:03.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6309" for this suite.
Dec 28 14:35:09.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:35:09.682: INFO: namespace kubectl-6309 deletion completed in 6.202650729s

• [SLOW TEST:8.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
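
The deprecation warning captured above is worth acting on: generator-based kubectl run was later removed for workload kinds. The closest modern equivalent is kubectl create job, available on clusters of roughly this vintage onward (exact flag support varies by kubectl version):

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6309
kubectl get jobs --namespace=kubectl-6309
kubectl delete job e2e-test-nginx-job --namespace=kubectl-6309
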
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:35:09.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 28 14:35:09.723: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:35:24.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3883" for this suite.
Dec 28 14:35:30.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:35:30.591: INFO: namespace init-container-3883 deletion completed in 6.180038313s

• [SLOW TEST:20.908 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
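
Init containers run sequentially to completion before any app container starts; with restartPolicy: Never, a failing init container fails the whole pod instead of retrying. The pod shape being created looks roughly like this (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["true"]            # must exit 0 before init-2 may start
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run-1
    image: busybox:1.29
    command: ["sh", "-c", "echo main started"]
EOF
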
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:35:30.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f55f5c65-8d91-4503-bd5f-30515adbdcc6
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f55f5c65-8d91-4503-bd5f-30515adbdcc6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:35:40.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7586" for this suite.
Dec 28 14:36:02.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:36:03.113: INFO: namespace configmap-7586 deletion completed in 22.173383152s

• [SLOW TEST:32.521 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
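
The "waiting to observe update in volume" step is bounded by the kubelet's periodic sync: configMap volume contents converge on the API state eventually (sync period plus cache TTL), not at write time, which is why the test polls. The update itself is just a patch; the object and pod names here are hypothetical:

kubectl create configmap demo-cm --from-literal=data-1=value-1
# ...start a pod that mounts demo-cm as a volume, then update the key:
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
# within a sync period the mounted file catches up:
kubectl exec demo-pod -- cat /etc/configmap-volume/data-1
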
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:36:03.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-fe85fd3c-8313-48c1-981c-f29a78ceb51e
STEP: Creating a pod to test consume secrets
Dec 28 14:36:03.262: INFO: Waiting up to 5m0s for pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1" in namespace "secrets-7096" to be "success or failure"
Dec 28 14:36:03.266: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.956957ms
Dec 28 14:36:05.275: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013675018s
Dec 28 14:36:07.281: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019503528s
Dec 28 14:36:09.311: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048822251s
Dec 28 14:36:11.317: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055482249s
Dec 28 14:36:13.328: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065824002s
STEP: Saw pod success
Dec 28 14:36:13.328: INFO: Pod "pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1" satisfied condition "success or failure"
Dec 28 14:36:13.333: INFO: Trying to get logs from node iruya-node pod pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1 container secret-volume-test: 
STEP: delete the pod
Dec 28 14:36:13.573: INFO: Waiting for pod pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1 to disappear
Dec 28 14:36:13.581: INFO: Pod pod-secrets-fef99366-ba21-4382-8473-09fce97ffbf1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:36:13.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7096" for this suite.
Dec 28 14:36:19.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:36:19.924: INFO: namespace secrets-7096 deletion completed in 6.335716909s

• [SLOW TEST:16.810 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
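
"Item Mode" stacks a per-file mode on top of the key-to-path mapping shown earlier, overriding defaultMode for that one projected file. An illustrative spec, once more against the hypothetical demo-secret:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400               # per-item mode beats defaultMode for this file
EOF
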
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:36:19.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:36:20.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f" in namespace "downward-api-5533" to be "success or failure"
Dec 28 14:36:20.135: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.332375ms
Dec 28 14:36:23.140: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030118658s
Dec 28 14:36:25.149: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.038734248s
Dec 28 14:36:27.157: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.047438893s
Dec 28 14:36:29.168: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.058196586s
STEP: Saw pod success
Dec 28 14:36:29.168: INFO: Pod "downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f" satisfied condition "success or failure"
Dec 28 14:36:29.171: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f container client-container: 
STEP: delete the pod
Dec 28 14:36:29.240: INFO: Waiting for pod downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f to disappear
Dec 28 14:36:29.286: INFO: Pod downwardapi-volume-d3ea4731-b061-4023-b101-72e39dd22b4f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:36:29.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5533" for this suite.
Dec 28 14:36:35.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:36:35.483: INFO: namespace downward-api-5533 deletion completed in 6.190935476s

• [SLOW TEST:15.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
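
The same defaultMode knob once more, this time on the downwardAPI volume type, where every rendered file (pod name, labels, and so on) gets the mode. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
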
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:36:35.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 28 14:36:35.553: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405556,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 14:36:35.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405556,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 28 14:36:45.608: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405570,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 28 14:36:45.608: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405570,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 28 14:36:55.623: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405583,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 14:36:55.623: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405583,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 28 14:37:05.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405597,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 14:37:05.640: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-a,UID:da480fe1-256b-459d-b6d0-d2af1973be8b,ResourceVersion:18405597,Generation:0,CreationTimestamp:2019-12-28 14:36:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 28 14:37:15.657: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-b,UID:50a5bdce-6934-46e8-b825-e6d3577bd050,ResourceVersion:18405610,Generation:0,CreationTimestamp:2019-12-28 14:37:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 14:37:15.657: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-b,UID:50a5bdce-6934-46e8-b825-e6d3577bd050,ResourceVersion:18405610,Generation:0,CreationTimestamp:2019-12-28 14:37:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 28 14:37:25.673: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-b,UID:50a5bdce-6934-46e8-b825-e6d3577bd050,ResourceVersion:18405626,Generation:0,CreationTimestamp:2019-12-28 14:37:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 14:37:25.673: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2433,SelfLink:/api/v1/namespaces/watch-2433/configmaps/e2e-watch-test-configmap-b,UID:50a5bdce-6934-46e8-b825-e6d3577bd050,ResourceVersion:18405626,Generation:0,CreationTimestamp:2019-12-28 14:37:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:37:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2433" for this suite.
Dec 28 14:37:41.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:37:41.990: INFO: namespace watch-2433 deletion completed in 6.305125122s

• [SLOW TEST:66.507 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
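Editor's note: each watcher above is label-selected, which is why only the A-labeled events reach watcher A. A minimal client-go sketch of one such watcher, assuming client-go v0.18+ (context-taking Watch) and the kubeconfig path from this run; the namespace and label value are taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying watcher A's label, mirroring the
	// selector behind the "multiple-watchers-A" events above.
	w, err := cs.CoreV1().ConfigMaps("watch-2433").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each received event corresponds to one "Got : ADDED/MODIFIED/DELETED" line.
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
	}
}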
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:37:41.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-123f3dc7-95f1-4f61-bf09-29da011d9d11
STEP: Creating a pod to test consume configMaps
Dec 28 14:37:42.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2" in namespace "configmap-2432" to be "success or failure"
Dec 28 14:37:42.201: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.071782ms
Dec 28 14:37:44.208: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020526917s
Dec 28 14:37:46.271: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083374972s
Dec 28 14:37:48.279: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092085185s
Dec 28 14:37:50.285: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098084669s
STEP: Saw pod success
Dec 28 14:37:50.285: INFO: Pod "pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2" satisfied condition "success or failure"
Dec 28 14:37:50.289: INFO: Trying to get logs from node iruya-node pod pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2 container configmap-volume-test: 
STEP: delete the pod
Dec 28 14:37:50.347: INFO: Waiting for pod pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2 to disappear
Dec 28 14:37:50.426: INFO: Pod pod-configmaps-93b60dbb-ff15-4914-9ba4-5947dea8edf2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:37:50.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2432" for this suite.
Dec 28 14:37:56.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:37:56.674: INFO: namespace configmap-2432 deletion completed in 6.227989911s

• [SLOW TEST:14.684 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
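Editor's note: the pod in this spec mounts one ConfigMap through two separate volumes and reads it at both paths. A minimal sketch of an equivalent spec using k8s.io/api types (names, image, and the sigs.k8s.io/yaml printing are illustrative, not the e2e framework's actual code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The same ConfigMap backs both volumes, so the container sees the
	// same keys under both mount paths.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		}}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource()},
				{Name: "configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}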
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:37:56.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f368bb1f-71e9-4041-9d99-a76e7bfa497c
STEP: Creating a pod to test consume secrets
Dec 28 14:37:56.772: INFO: Waiting up to 5m0s for pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a" in namespace "secrets-5414" to be "success or failure"
Dec 28 14:37:56.847: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 75.435525ms
Dec 28 14:37:58.858: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086500175s
Dec 28 14:38:00.872: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100480634s
Dec 28 14:38:02.893: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121461349s
Dec 28 14:38:04.900: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128439713s
STEP: Saw pod success
Dec 28 14:38:04.901: INFO: Pod "pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a" satisfied condition "success or failure"
Dec 28 14:38:04.903: INFO: Trying to get logs from node iruya-node pod pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a container secret-env-test: 
STEP: delete the pod
Dec 28 14:38:05.013: INFO: Waiting for pod pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a to disappear
Dec 28 14:38:05.027: INFO: Pod pod-secrets-2a155eca-8d1b-4a71-9bfc-abbe7d018d6a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:38:05.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5414" for this suite.
Dec 28 14:38:11.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:38:11.399: INFO: namespace secrets-5414 deletion completed in 6.36550683s

• [SLOW TEST:14.723 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
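Editor's note: here the secret is consumed through an environment variable rather than a volume. A sketch of the relevant pod spec under the same assumptions as above; the secret name and key are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// One secret key is exported as SECRET_DATA; the test pod succeeds
	// if the container observes the expected value in its environment.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}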
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:38:11.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 28 14:38:11.585: INFO: Waiting up to 5m0s for pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985" in namespace "var-expansion-6467" to be "success or failure"
Dec 28 14:38:11.591: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985": Phase="Pending", Reason="", readiness=false. Elapsed: 5.980602ms
Dec 28 14:38:13.608: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023566893s
Dec 28 14:38:15.615: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030356142s
Dec 28 14:38:17.625: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039612025s
Dec 28 14:38:19.634: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04944713s
STEP: Saw pod success
Dec 28 14:38:19.634: INFO: Pod "var-expansion-bc3469a6-5078-4329-a447-e5d254487985" satisfied condition "success or failure"
Dec 28 14:38:19.642: INFO: Trying to get logs from node iruya-node pod var-expansion-bc3469a6-5078-4329-a447-e5d254487985 container dapi-container: 
STEP: delete the pod
Dec 28 14:38:19.827: INFO: Waiting for pod var-expansion-bc3469a6-5078-4329-a447-e5d254487985 to disappear
Dec 28 14:38:19.836: INFO: Pod var-expansion-bc3469a6-5078-4329-a447-e5d254487985 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:38:19.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6467" for this suite.
Dec 28 14:38:25.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:38:26.137: INFO: namespace var-expansion-6467 deletion completed in 6.199283053s

• [SLOW TEST:14.739 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
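Editor's note: the expansion under test is kubelet-side $(VAR) substitution in command/args, performed without any shell. A sketch under the same assumptions as above (names and image illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// $(TEST_VAR) in Args is expanded by the kubelet from the container's
	// own env; no shell is involved, so no quoting rules apply.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				Command: []string{"echo"},
				Args:    []string{"$(TEST_VAR)"}, // prints "test-value"
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}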
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:38:26.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3865997f-99f1-4cdf-bf30-fb7cebe06637
STEP: Creating a pod to test consume configMaps
Dec 28 14:38:26.282: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1" in namespace "projected-3040" to be "success or failure"
Dec 28 14:38:26.287: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.003786ms
Dec 28 14:38:28.298: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015723827s
Dec 28 14:38:30.310: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02786463s
Dec 28 14:38:32.319: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037164706s
Dec 28 14:38:34.329: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047021286s
STEP: Saw pod success
Dec 28 14:38:34.329: INFO: Pod "pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1" satisfied condition "success or failure"
Dec 28 14:38:34.334: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 14:38:34.389: INFO: Waiting for pod pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1 to disappear
Dec 28 14:38:34.503: INFO: Pod pod-projected-configmaps-fedb1459-406d-4bd5-a7a6-ff2a638d61e1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:38:34.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3040" for this suite.
Dec 28 14:38:40.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:38:40.825: INFO: namespace projected-3040 deletion completed in 6.313454753s

• [SLOW TEST:14.687 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
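Editor's note: defaultMode on a projected volume sets the permission bits for every projected file. A sketch of the spec shape (mode 0400 is chosen for illustration; the value actually used by the test is not shown in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // applied to every file in the projected volume
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}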
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:38:40.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-fd655bcc-d51d-4a77-8cef-6e9b3a6c669d in namespace container-probe-3954
Dec 28 14:38:51.023: INFO: Started pod test-webserver-fd655bcc-d51d-4a77-8cef-6e9b3a6c669d in namespace container-probe-3954
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 14:38:51.027: INFO: Initial restart count of pod test-webserver-fd655bcc-d51d-4a77-8cef-6e9b3a6c669d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:42:51.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3954" for this suite.
Dec 28 14:42:57.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:42:57.557: INFO: namespace container-probe-3954 deletion completed in 6.184642471s

• [SLOW TEST:256.731 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
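Editor's note: this spec passes when restartCount stays at 0 for the entire observation window, i.e. the probe keeps succeeding. A sketch of an HTTP liveness probe, assuming k8s.io/api v0.23+ (the field is ProbeHandler there; older releases call it Handler); the image is a placeholder that must actually return 200 on /healthz:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "test-webserver", // placeholder: any image serving 200 OK on /healthz
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3, // 3 consecutive failures trigger a restart
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}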
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:42:57.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-2lt6
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 14:42:57.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2lt6" in namespace "subpath-3983" to be "success or failure"
Dec 28 14:42:57.724: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.816069ms
Dec 28 14:42:59.740: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018827212s
Dec 28 14:43:01.752: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030562298s
Dec 28 14:43:03.762: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040529291s
Dec 28 14:43:05.773: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 8.052152015s
Dec 28 14:43:07.782: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 10.06114811s
Dec 28 14:43:09.795: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 12.073423291s
Dec 28 14:43:11.811: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 14.089983543s
Dec 28 14:43:13.818: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 16.096448772s
Dec 28 14:43:15.829: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 18.108030615s
Dec 28 14:43:17.838: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 20.117019065s
Dec 28 14:43:19.850: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 22.128691561s
Dec 28 14:43:21.861: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 24.140188679s
Dec 28 14:43:23.881: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 26.159930875s
Dec 28 14:43:25.957: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Running", Reason="", readiness=true. Elapsed: 28.235329776s
Dec 28 14:43:27.963: INFO: Pod "pod-subpath-test-secret-2lt6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.241791215s
STEP: Saw pod success
Dec 28 14:43:27.963: INFO: Pod "pod-subpath-test-secret-2lt6" satisfied condition "success or failure"
Dec 28 14:43:27.967: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-2lt6 container test-container-subpath-secret-2lt6: 
STEP: delete the pod
Dec 28 14:43:28.031: INFO: Waiting for pod pod-subpath-test-secret-2lt6 to disappear
Dec 28 14:43:28.046: INFO: Pod pod-subpath-test-secret-2lt6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-2lt6
Dec 28 14:43:28.046: INFO: Deleting pod "pod-subpath-test-secret-2lt6" in namespace "subpath-3983"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:43:28.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3983" for this suite.
Dec 28 14:43:34.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:43:34.384: INFO: namespace subpath-3983 deletion completed in 6.330934115s

• [SLOW TEST:36.827 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
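Editor's note: subPath mounts a single path from inside a volume rather than the whole volume; the atomic-writer part of the test verifies such mounts stay consistent while the volume contents are updated, which is why the pod sits in Running through many update rounds above. A simplified sketch of a secret subPath mount (secret name, key, and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-pod/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-pod/data-1",
					SubPath:   "data-1", // mount one key of the volume as a single file
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}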
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:43:34.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:43:34.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65" in namespace "downward-api-6995" to be "success or failure"
Dec 28 14:43:34.561: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65": Phase="Pending", Reason="", readiness=false. Elapsed: 11.272054ms
Dec 28 14:43:36.577: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026493746s
Dec 28 14:43:38.595: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045114101s
Dec 28 14:43:40.611: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061233512s
Dec 28 14:43:42.633: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082797456s
STEP: Saw pod success
Dec 28 14:43:42.633: INFO: Pod "downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65" satisfied condition "success or failure"
Dec 28 14:43:42.647: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65 container client-container: 
STEP: delete the pod
Dec 28 14:43:42.909: INFO: Waiting for pod downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65 to disappear
Dec 28 14:43:42.922: INFO: Pod downwardapi-volume-9ac7a56b-322f-4af5-a42b-35cf01820d65 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:43:42.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6995" for this suite.
Dec 28 14:43:49.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:43:49.150: INFO: namespace downward-api-6995 deletion completed in 6.147227566s

• [SLOW TEST:14.765 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
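Editor's note: the downward API volume materializes pod metadata as files. A sketch exposing only metadata.name, matching what this spec checks (paths and image illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file /etc/podinfo/podname will contain the pod's own name.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}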
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:43:49.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 28 14:43:49.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 28 14:43:49.499: INFO: stderr: ""
Dec 28 14:43:49.499: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:43:49.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2282" for this suite.
Dec 28 14:43:55.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:43:55.662: INFO: namespace kubectl-2282 deletion completed in 6.152279972s

• [SLOW TEST:6.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
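Editor's note: programmatically this check is the discovery API rather than kubectl. A sketch using client-go's discovery client with the kubeconfig path from this run; the legacy core group has an empty name, so its sole version prints as plain "v1", the entry the test asserts on:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	// Prints the same list as `kubectl api-versions`.
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
		}
	}
}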
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:43:55.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:44:02.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6225" for this suite.
Dec 28 14:44:08.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:44:08.271: INFO: namespace namespaces-6225 deletion completed in 6.186448063s
STEP: Destroying namespace "nsdeletetest-1761" for this suite.
Dec 28 14:44:08.275: INFO: Namespace nsdeletetest-1761 was already deleted
STEP: Destroying namespace "nsdeletetest-8771" for this suite.
Dec 28 14:44:14.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:44:14.446: INFO: namespace nsdeletetest-8771 deletion completed in 6.170981105s

• [SLOW TEST:18.784 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
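Editor's note: namespace deletion cascades to every namespaced object, services included, and a recreated namespace starts empty. A sketch of the verification step with a hypothetical namespace name; waiting for the old namespace to fully disappear before recreating it is elided:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "nsdeletetest-demo" // hypothetical name
	// Deleting the namespace garbage-collects every object inside it.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// ... wait for the namespace to terminate, recreate it, then verify:
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d (expect 0)\n", len(svcs.Items))
}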
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:44:14.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:44:20.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2114" for this suite.
Dec 28 14:44:26.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:44:26.393: INFO: namespace watch-2114 deletion completed in 6.284056867s

• [SLOW TEST:11.946 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
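Editor's note: the guarantee under test is that two watches started from the same resourceVersion see identical event streams. A sketch assuming client-go v0.18+; the readVersions helper is illustrative, not part of client-go:

package main

import (
	"context"
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// readVersions drains n events from a watch and records each object's
// resourceVersion.
func readVersions(w watch.Interface, n int) []string {
	var out []string
	for ev := range w.ResultChan() {
		out = append(out, ev.Object.(*corev1.ConfigMap).ResourceVersion)
		if len(out) == n {
			break
		}
	}
	return out
}

func main() {
	ctx := context.TODO()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Pin both watches to the same concrete resourceVersion from a List,
	// so the server must replay the same events in the same order to each.
	list, err := cs.CoreV1().ConfigMaps("watch-demo").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}
	w1, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	defer w1.Stop()
	w2, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	defer w2.Stop()

	// While a background client mutates configmaps in the namespace, both
	// watches should report identical resourceVersion sequences.
	fmt.Println(reflect.DeepEqual(readVersions(w1, 5), readVersions(w2, 5)))
}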
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:44:26.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:44:26.504: INFO: Creating deployment "test-recreate-deployment"
Dec 28 14:44:26.524: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 28 14:44:26.626: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 28 14:44:28.646: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 28 14:44:28.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:44:30.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:44:32.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:44:34.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:44:36.669: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 28 14:44:36.705: INFO: Updating deployment test-recreate-deployment
Dec 28 14:44:36.705: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 28 14:44:37.185: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2177,SelfLink:/apis/apps/v1/namespaces/deployment-2177/deployments/test-recreate-deployment,UID:e516fb9a-67ea-4634-8a16-5d6ef39431b0,ResourceVersion:18406582,Generation:2,CreationTimestamp:2019-12-28 14:44:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-28 14:44:37 +0000 UTC 2019-12-28 14:44:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-28 14:44:37 +0000 UTC 2019-12-28 14:44:26 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 28 14:44:37.215: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2177,SelfLink:/apis/apps/v1/namespaces/deployment-2177/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c6fa5f80-423d-44f5-88f3-1f0a88998c12,ResourceVersion:18406581,Generation:1,CreationTimestamp:2019-12-28 14:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e516fb9a-67ea-4634-8a16-5d6ef39431b0 0xc0035c24e7 0xc0035c24e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 14:44:37.215: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 28 14:44:37.215: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2177,SelfLink:/apis/apps/v1/namespaces/deployment-2177/replicasets/test-recreate-deployment-6df85df6b9,UID:27b9f36f-5c94-457e-b6e0-067e605104d8,ResourceVersion:18406570,Generation:2,CreationTimestamp:2019-12-28 14:44:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e516fb9a-67ea-4634-8a16-5d6ef39431b0 0xc0035c25b7 0xc0035c25b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 14:44:37.220: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5gq55" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5gq55,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2177,SelfLink:/api/v1/namespaces/deployment-2177/pods/test-recreate-deployment-5c8c9cc69d-5gq55,UID:16cbfed9-6227-4dec-80da-513ffb4a245a,ResourceVersion:18406583,Generation:0,CreationTimestamp:2019-12-28 14:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c6fa5f80-423d-44f5-88f3-1f0a88998c12 0xc0035c2e77 0xc0035c2e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7lcns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7lcns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7lcns true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0035c2ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0035c2f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:44:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:44:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:44:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 14:44:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-28 14:44:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:44:37.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2177" for this suite.
Dec 28 14:44:43.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:44:43.521: INFO: namespace deployment-2177 deletion completed in 6.297522632s

• [SLOW TEST:17.126 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
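Editor's note: with the Recreate strategy the controller scales the old ReplicaSet to zero before the new one creates any pods, which is why the dump above shows the old ReplicaSet at Replicas:*0 while the new one is still progressing. A sketch of the deployment spec (replica count, labels, and image taken from the log; the yaml printing is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: kill all old pods before creating new ones,
			// so the two revisions never run side by side.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(d)
	fmt.Println(string(out))
}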
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:44:43.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 28 14:44:43.692: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:45:06.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7304" for this suite.
Dec 28 14:45:12.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:45:12.733: INFO: namespace pods-7304 deletion completed in 6.126123029s

• [SLOW TEST:29.212 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
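Editor's note: a graceful delete first surfaces on a watch as MODIFIED events (deletionTimestamp set while the kubelet shuts the container down) and only then as DELETED, which is the sequence this spec verifies. A sketch assuming client-go v0.18+; namespace, pod name, and grace period are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "pods-demo", "pod-submit-remove"
	// Watch just this pod by name.
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	}); err != nil {
		panic(err)
	}
	// Expect MODIFIED events during termination, then a final DELETED.
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}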
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:45:12.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on the node's default medium
Dec 28 14:45:12.952: INFO: Waiting up to 5m0s for pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723" in namespace "emptydir-822" to be "success or failure"
Dec 28 14:45:12.958: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071453ms
Dec 28 14:45:14.964: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012495263s
Dec 28 14:45:16.976: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024640881s
Dec 28 14:45:18.992: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039929107s
Dec 28 14:45:21.000: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048617029s
Dec 28 14:45:23.017: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064921756s
STEP: Saw pod success
Dec 28 14:45:23.017: INFO: Pod "pod-ff804873-00c5-4ce3-b8af-12419a1d9723" satisfied condition "success or failure"
Dec 28 14:45:23.022: INFO: Trying to get logs from node iruya-node pod pod-ff804873-00c5-4ce3-b8af-12419a1d9723 container test-container: 
STEP: delete the pod
Dec 28 14:45:23.763: INFO: Waiting for pod pod-ff804873-00c5-4ce3-b8af-12419a1d9723 to disappear
Dec 28 14:45:23.770: INFO: Pod pod-ff804873-00c5-4ce3-b8af-12419a1d9723 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:45:23.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-822" for this suite.
Dec 28 14:45:29.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:45:29.965: INFO: namespace emptydir-822 deletion completed in 6.172682222s

• [SLOW TEST:17.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
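Editor's note: an empty EmptyDirVolumeSource selects the default medium (node-local storage, as opposed to Medium: "Memory" for tmpfs). A sketch of the spec; the conformance check expects the mount to show mode drwxrwxrwx on Linux:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// No Medium set: backed by the node's default storage.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}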
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:45:29.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d567051a-afe4-4e6f-86d9-b688ee51ca65
STEP: Creating a pod to test consume secrets
Dec 28 14:45:30.145: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210" in namespace "projected-7481" to be "success or failure"
Dec 28 14:45:30.156: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822112ms
Dec 28 14:45:32.174: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028962493s
Dec 28 14:45:34.246: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101029779s
Dec 28 14:45:36.258: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112745772s
Dec 28 14:45:38.314: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169022669s
STEP: Saw pod success
Dec 28 14:45:38.314: INFO: Pod "pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210" satisfied condition "success or failure"
Dec 28 14:45:38.320: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 14:45:38.376: INFO: Waiting for pod pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210 to disappear
Dec 28 14:45:38.389: INFO: Pod pod-projected-secrets-d5e7418f-6473-4447-b99f-fc7103260210 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:45:38.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7481" for this suite.
Dec 28 14:45:44.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:45:44.627: INFO: namespace projected-7481 deletion completed in 6.230724468s

• [SLOW TEST:14.662 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
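"Mappings and Item Mode set" refers to remapping a secret key to a new file path with an explicit per-file mode inside a projected volume. A sketch of that volume definition; the key and path names here are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume maps one secret key to a new path with an explicit
// file mode (0400) instead of the projection's defaultMode.
func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key inside the Secret
							Path: "new-path-data-1", // file name under the mount point
							Mode: &mode,             // per-item mode (the "Item Mode" under test)
						}},
					},
				}},
			},
		},
	}
}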
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:45:44.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:45:44.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36" in namespace "projected-5325" to be "success or failure"
Dec 28 14:45:44.716: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36": Phase="Pending", Reason="", readiness=false. Elapsed: 17.364813ms
Dec 28 14:45:46.725: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025831146s
Dec 28 14:45:48.734: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034711082s
Dec 28 14:45:50.749: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049939045s
Dec 28 14:45:52.760: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061300506s
STEP: Saw pod success
Dec 28 14:45:52.760: INFO: Pod "downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36" satisfied condition "success or failure"
Dec 28 14:45:52.765: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36 container client-container: 
STEP: delete the pod
Dec 28 14:45:52.897: INFO: Waiting for pod downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36 to disappear
Dec 28 14:45:52.906: INFO: Pod downwardapi-volume-f565a1a1-fa85-4454-804e-82c75f4feb36 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:45:52.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5325" for this suite.
Dec 28 14:45:58.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:45:59.137: INFO: namespace projected-5325 deletion completed in 6.22108149s

• [SLOW TEST:14.510 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
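The "cpu limit" variant of the projected downwardAPI test wires a container's limits.cpu into a file via a resourceFieldRef; the container must declare a CPU limit for the reference to resolve. A sketch with illustrative names and quantities:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPICPULimit returns a projected downwardAPI volume exposing
// limits.cpu, plus a container that declares that limit and reads the file.
func downwardAPICPULimit() ([]corev1.Volume, corev1.Container) {
	vols := []corev1.Volume{{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}}
	c := corev1.Container{
		Name:    "client-container",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	return vols, c
}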
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:45:59.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 28 14:45:59.404: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3343,SelfLink:/api/v1/namespaces/watch-3343/configmaps/e2e-watch-test-resource-version,UID:f076d5ba-a4b7-4319-b77f-c2743c4b4042,ResourceVersion:18406816,Generation:0,CreationTimestamp:2019-12-28 14:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 14:45:59.405: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3343,SelfLink:/api/v1/namespaces/watch-3343/configmaps/e2e-watch-test-resource-version,UID:f076d5ba-a4b7-4319-b77f-c2743c4b4042,ResourceVersion:18406817,Generation:0,CreationTimestamp:2019-12-28 14:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:45:59.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3343" for this suite.
Dec 28 14:46:05.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:46:05.576: INFO: namespace watch-3343 deletion completed in 6.164417706s

• [SLOW TEST:6.439 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
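What the watch test exercises is resumption from an older resourceVersion: the API server replays every change after that point, which is why the log shows exactly the second MODIFIED and the DELETED event. A minimal client-go sketch, assuming a current client-go (v0.18+) where Watch takes a context (the v1.15-era client omitted it); the resourceVersion value is illustrative, since the log only shows the later versions 18406816/18406817:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Start the watch at the resourceVersion returned by the first update;
	// the server replays all subsequent changes to the object.
	w, err := cs.CoreV1().ConfigMaps("watch-3343").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "18406815", // illustrative: version from the first update
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}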
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:46:05.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:46:05.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4932" for this suite.
Dec 28 14:46:27.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:46:27.922: INFO: namespace pods-4932 deletion completed in 22.198516625s

• [SLOW TEST:22.346 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
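The QoS test only logs high-level STEPs, but the mechanism is simple: the QoS class in pod status is derived from the containers' resource requests and limits. A sketch with illustrative quantities; requests equal to limits for every declared resource yields Guaranteed (requests below limits would yield Burstable; none at all, BestEffort):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// guaranteedContainer has requests == limits for every declared resource,
// so a pod made only of such containers gets status.qosClass=Guaranteed.
func guaranteedContainer() corev1.Container {
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("50Mi"),
	}
	return corev1.Container{
		Name:      "qos-test",
		Image:     "k8s.gcr.io/pause:3.1",
		Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
	}
}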
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:46:27.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 28 14:46:28.025: INFO: PodSpec: initContainers in spec.initContainers
Dec 28 14:47:31.943: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-85acefcb-f3f5-4286-aff5-62b57842a58c", GenerateName:"", Namespace:"init-container-8315", SelfLink:"/api/v1/namespaces/init-container-8315/pods/pod-init-85acefcb-f3f5-4286-aff5-62b57842a58c", UID:"7416cff1-1d7d-4056-8fa3-6cb2d7dee41d", ResourceVersion:"18406994", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713141188, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"25336592"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hccvs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0031b4000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hccvs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hccvs", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hccvs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00332e088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003340000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00332e110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00332e130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00332e138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00332e13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141188, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141188, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141188, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141188, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0034f0060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002012070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020120e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://eed9090c562e543221873e56be16a0f1e2e7678e3ae00c2c8a21ee3f7e3a77f5"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034f00a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034f0080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:47:31.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8315" for this suite.
Dec 28 14:47:54.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:47:54.177: INFO: namespace init-container-8315 deletion completed in 22.164544385s

• [SLOW TEST:86.254 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
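The spec dumped above reduces to the following, which is the whole mechanism under test: init1 always fails, so init2 and the app container run1 never start, and because RestartPolicy is Always the kubelet keeps restarting init1 (the dump shows RestartCount:3 by the time the test has observed two failures). A condensed sketch of that pod:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the spec in the dump above: init containers run
// sequentially, so a permanently failing init1 blocks both init2 and run1.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}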
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:47:54.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:47:54.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79" in namespace "downward-api-5956" to be "success or failure"
Dec 28 14:47:54.274: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.413947ms
Dec 28 14:47:56.283: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011810418s
Dec 28 14:47:58.291: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02011255s
Dec 28 14:48:00.302: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030791777s
Dec 28 14:48:02.315: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043791595s
STEP: Saw pod success
Dec 28 14:48:02.315: INFO: Pod "downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79" satisfied condition "success or failure"
Dec 28 14:48:02.319: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79 container client-container: 
STEP: delete the pod
Dec 28 14:48:02.387: INFO: Waiting for pod downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79 to disappear
Dec 28 14:48:02.391: INFO: Pod downwardapi-volume-563d9fc5-4e80-4ebc-8ee0-c43c6e8e3d79 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:48:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5956" for this suite.
Dec 28 14:48:08.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:48:08.677: INFO: namespace downward-api-5956 deletion completed in 6.279040424s

• [SLOW TEST:14.499 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
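This is the same pattern as the projected downwardAPI example earlier, but with a plain (non-projected) downwardAPI volume source and limits.memory instead of limits.cpu. A sketch, again with illustrative names and sizes:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryLimitVolume exposes the container's limits.memory through a plain
// downwardAPI volume; the container declares the limit and reads it back.
func memoryLimitVolume() (corev1.Volume, corev1.Container) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	c := corev1.Container{
		Name:    "client-container",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	return vol, c
}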
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:48:08.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 28 14:48:08.800: INFO: Waiting up to 5m0s for pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3" in namespace "emptydir-3691" to be "success or failure"
Dec 28 14:48:08.806: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.921991ms
Dec 28 14:48:10.811: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011627014s
Dec 28 14:48:12.822: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021977673s
Dec 28 14:48:14.981: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181179977s
Dec 28 14:48:16.998: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198737186s
Dec 28 14:48:19.007: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207010048s
STEP: Saw pod success
Dec 28 14:48:19.007: INFO: Pod "pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3" satisfied condition "success or failure"
Dec 28 14:48:19.009: INFO: Trying to get logs from node iruya-node pod pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3 container test-container: 
STEP: delete the pod
Dec 28 14:48:19.073: INFO: Waiting for pod pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3 to disappear
Dec 28 14:48:19.095: INFO: Pod pod-9c64e6bd-0bdf-4b80-894a-38e3da065fd3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:48:19.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3691" for this suite.
Dec 28 14:48:25.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:48:25.353: INFO: namespace emptydir-3691 deletion completed in 6.25055113s

• [SLOW TEST:16.675 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:48:25.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:48:25.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:48:36.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6607" for this suite.
Dec 28 14:49:38.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:49:38.739: INFO: namespace pods-6607 deletion completed in 1m2.701403128s

• [SLOW TEST:73.384 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
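The conformance test above drives the pod's exec subresource over a raw websocket; most Go clients reach the same endpoint through client-go's SPDY executor instead, which is what this sketch uses. Assumes a reasonably current client-go; the pod name and command are hypothetical:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the URL of the pod's exec subresource; the websocket-based
	// conformance test hits this same endpoint with a different transport.
	req := cs.CoreV1().RESTClient().Post().
		Namespace("pods-6607").Resource("pods").Name("pod-exec-websockets"). // hypothetical pod name
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution works"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}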
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:49:38.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 28 14:49:38.876: INFO: Waiting up to 5m0s for pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a" in namespace "emptydir-1458" to be "success or failure"
Dec 28 14:49:38.896: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.35273ms
Dec 28 14:49:40.909: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032941067s
Dec 28 14:49:42.919: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042671204s
Dec 28 14:49:44.928: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052559807s
Dec 28 14:49:46.948: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072504764s
STEP: Saw pod success
Dec 28 14:49:46.949: INFO: Pod "pod-2a031200-dd21-48bf-bb2c-5d165486b99a" satisfied condition "success or failure"
Dec 28 14:49:46.954: INFO: Trying to get logs from node iruya-node pod pod-2a031200-dd21-48bf-bb2c-5d165486b99a container test-container: 
STEP: delete the pod
Dec 28 14:49:47.029: INFO: Waiting for pod pod-2a031200-dd21-48bf-bb2c-5d165486b99a to disappear
Dec 28 14:49:47.032: INFO: Pod pod-2a031200-dd21-48bf-bb2c-5d165486b99a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:49:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1458" for this suite.
Dec 28 14:49:53.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:49:53.260: INFO: namespace emptydir-1458 deletion completed in 6.162605565s

• [SLOW TEST:14.521 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:49:53.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-45a7066c-7a8d-4001-a65e-7b1d9d492dd0
STEP: Creating a pod to test consume secrets
Dec 28 14:49:53.697: INFO: Waiting up to 5m0s for pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980" in namespace "secrets-841" to be "success or failure"
Dec 28 14:49:53.704: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526281ms
Dec 28 14:49:55.711: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013697681s
Dec 28 14:49:57.726: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028638773s
Dec 28 14:49:59.779: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08214658s
Dec 28 14:50:01.812: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115359765s
STEP: Saw pod success
Dec 28 14:50:01.813: INFO: Pod "pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980" satisfied condition "success or failure"
Dec 28 14:50:01.817: INFO: Trying to get logs from node iruya-node pod pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980 container secret-volume-test: 
STEP: delete the pod
Dec 28 14:50:01.980: INFO: Waiting for pod pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980 to disappear
Dec 28 14:50:01.992: INFO: Pod pod-secrets-86711dd3-d625-4de2-b8f7-759bc124f980 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:50:01.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-841" for this suite.
Dec 28 14:50:08.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:50:08.233: INFO: namespace secrets-841 deletion completed in 6.232280811s
STEP: Destroying namespace "secret-namespace-9777" for this suite.
Dec 28 14:50:14.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:50:14.367: INFO: namespace secret-namespace-9777 deletion completed in 6.133995076s

• [SLOW TEST:21.107 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
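Note the two namespace teardowns above: the test creates a same-named secret in a second namespace (secret-namespace-9777) to prove that a secret volume reference resolves only within the pod's own namespace. The volume itself is just a mount-by-name; a sketch:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// secretVolume mounts a secret by name. Secrets are namespace-scoped, so
// the reference can only resolve to a secret in the pod's own namespace,
// regardless of identically named secrets elsewhere.
func secretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: secretName},
		},
	}
}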
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:50:14.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:50:22.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4428" for this suite.
Dec 28 14:51:04.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:51:05.071: INFO: namespace kubelet-test-4428 deletion completed in 42.17624139s

• [SLOW TEST:50.703 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
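The read-only-root test hinges on a single container-level field. A sketch of the container it implies; the file path and command are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// readOnlyRootContainer sets readOnlyRootFilesystem, so any write outside
// a mounted volume fails; the shell command below exits non-zero when the
// write to the root filesystem is rejected.
func readOnlyRootContainer() corev1.Container {
	readOnly := true
	return corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "echo test > /file_on_root"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}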
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:51:05.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5244
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 14:51:05.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 14:51:45.417: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5244 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:51:45.417: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:51:45.923: INFO: Waiting for endpoints: map[]
Dec 28 14:51:45.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5244 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 14:51:45.932: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 14:51:46.244: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:51:46.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5244" for this suite.
Dec 28 14:52:10.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:52:10.405: INFO: namespace pod-network-test-5244 deletion completed in 24.139447558s

• [SLOW TEST:65.334 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
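The ExecWithOptions lines above show the actual probe: curl against the host-test pod's webserver (10.44.0.2), whose /dial endpoint fans a request out to the target pod (10.44.0.1, then 10.32.0.4) and reports which hostname answered. The equivalent in plain Go, with the IPs copied from the log (pod IPs, reachable only from inside the cluster; the sample response format is an assumption about the test webserver):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same query the test issues via curl: ask the host-test pod's webserver
	// to dial the target pod once over HTTP and report the responder's hostname.
	url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]}
}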
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:52:10.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9126.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9126.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9126.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9126.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9126.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9126.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 14:52:22.669: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.674: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.680: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9126.svc.cluster.local from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.690: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.695: INFO: Unable to read jessie_udp@PodARecord from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.699: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2: the server could not find the requested resource (get pods dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2)
Dec 28 14:52:22.699: INFO: Lookups using dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9126.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 14:52:27.775: INFO: DNS probes using dns-9126/dns-test-d6bf2209-e718-43d0-9c44-636c867d34c2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:52:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9126" for this suite.
Dec 28 14:52:33.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:52:34.023: INFO: namespace dns-9126 deletion completed in 6.183875214s

• [SLOW TEST:23.618 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:52:34.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 28 14:52:34.147: INFO: Waiting up to 5m0s for pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2" in namespace "emptydir-6212" to be "success or failure"
Dec 28 14:52:34.172: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.23016ms
Dec 28 14:52:36.182: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034691663s
Dec 28 14:52:38.190: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042717437s
Dec 28 14:52:40.197: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049739648s
Dec 28 14:52:42.206: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059345056s
STEP: Saw pod success
Dec 28 14:52:42.206: INFO: Pod "pod-d6d210f2-7f36-4959-914b-f967cebb33e2" satisfied condition "success or failure"
Dec 28 14:52:42.210: INFO: Trying to get logs from node iruya-node pod pod-d6d210f2-7f36-4959-914b-f967cebb33e2 container test-container: 
STEP: delete the pod
Dec 28 14:52:42.290: INFO: Waiting for pod pod-d6d210f2-7f36-4959-914b-f967cebb33e2 to disappear
Dec 28 14:52:42.339: INFO: Pod pod-d6d210f2-7f36-4959-914b-f967cebb33e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:52:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6212" for this suite.
Dec 28 14:52:48.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:52:48.579: INFO: namespace emptydir-6212 deletion completed in 6.207233585s

• [SLOW TEST:14.556 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
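The tmpfs variants of the emptyDir tests differ from the default-medium sketch earlier in exactly one field: Medium. A sketch:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// tmpfsEmptyDir selects Medium "Memory", backing the volume with tmpfs
// (RAM) instead of node disk; everything else matches the default variant.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}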
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:52:48.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 14:52:48.798: INFO: Number of nodes with available pods: 0
Dec 28 14:52:48.798: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:50.252: INFO: Number of nodes with available pods: 0
Dec 28 14:52:50.252: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:50.810: INFO: Number of nodes with available pods: 0
Dec 28 14:52:50.811: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:52.093: INFO: Number of nodes with available pods: 0
Dec 28 14:52:52.093: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:52.837: INFO: Number of nodes with available pods: 0
Dec 28 14:52:52.837: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:53.854: INFO: Number of nodes with available pods: 0
Dec 28 14:52:53.854: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:56.383: INFO: Number of nodes with available pods: 0
Dec 28 14:52:56.383: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:57.032: INFO: Number of nodes with available pods: 0
Dec 28 14:52:57.032: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:58.593: INFO: Number of nodes with available pods: 0
Dec 28 14:52:58.593: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:58.871: INFO: Number of nodes with available pods: 0
Dec 28 14:52:58.871: INFO: Node iruya-node is running more than one daemon pod
Dec 28 14:52:59.820: INFO: Number of nodes with available pods: 1
Dec 28 14:52:59.820: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:00.816: INFO: Number of nodes with available pods: 2
Dec 28 14:53:00.816: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 28 14:53:00.897: INFO: Number of nodes with available pods: 1
Dec 28 14:53:00.897: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:01.920: INFO: Number of nodes with available pods: 1
Dec 28 14:53:01.920: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:02.907: INFO: Number of nodes with available pods: 1
Dec 28 14:53:02.907: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:03.994: INFO: Number of nodes with available pods: 1
Dec 28 14:53:03.995: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:05.216: INFO: Number of nodes with available pods: 1
Dec 28 14:53:05.216: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:05.932: INFO: Number of nodes with available pods: 1
Dec 28 14:53:05.932: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:06.913: INFO: Number of nodes with available pods: 1
Dec 28 14:53:06.913: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:07.919: INFO: Number of nodes with available pods: 1
Dec 28 14:53:07.919: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:08.921: INFO: Number of nodes with available pods: 1
Dec 28 14:53:08.921: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:09.921: INFO: Number of nodes with available pods: 1
Dec 28 14:53:09.921: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:10.921: INFO: Number of nodes with available pods: 1
Dec 28 14:53:10.921: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:11.917: INFO: Number of nodes with available pods: 1
Dec 28 14:53:11.917: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:12.913: INFO: Number of nodes with available pods: 1
Dec 28 14:53:12.913: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:13.915: INFO: Number of nodes with available pods: 1
Dec 28 14:53:13.915: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:14.931: INFO: Number of nodes with available pods: 1
Dec 28 14:53:14.932: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:15.925: INFO: Number of nodes with available pods: 1
Dec 28 14:53:15.925: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:16.919: INFO: Number of nodes with available pods: 1
Dec 28 14:53:16.919: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:18.031: INFO: Number of nodes with available pods: 1
Dec 28 14:53:18.031: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:19.515: INFO: Number of nodes with available pods: 1
Dec 28 14:53:19.515: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:20.109: INFO: Number of nodes with available pods: 1
Dec 28 14:53:20.109: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:20.916: INFO: Number of nodes with available pods: 1
Dec 28 14:53:20.917: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:23.074: INFO: Number of nodes with available pods: 1
Dec 28 14:53:23.074: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:24.262: INFO: Number of nodes with available pods: 1
Dec 28 14:53:24.262: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:24.911: INFO: Number of nodes with available pods: 1
Dec 28 14:53:24.911: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:25.918: INFO: Number of nodes with available pods: 1
Dec 28 14:53:25.918: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 28 14:53:26.914: INFO: Number of nodes with available pods: 2
Dec 28 14:53:26.914: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5114, will wait for the garbage collector to delete the pods
Dec 28 14:53:26.985: INFO: Deleting DaemonSet.extensions daemon-set took: 13.811377ms
Dec 28 14:53:27.286: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.757419ms
Dec 28 14:53:35.093: INFO: Number of nodes with available pods: 0
Dec 28 14:53:35.093: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 14:53:35.096: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5114/daemonsets","resourceVersion":"18407834"},"items":null}

Dec 28 14:53:35.098: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5114/pods","resourceVersion":"18407834"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:53:35.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5114" for this suite.
Dec 28 14:53:41.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:53:41.437: INFO: namespace daemonsets-5114 deletion completed in 6.19724422s

• [SLOW TEST:52.856 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
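For reference, the revive behavior exercised above can be reproduced with a minimal DaemonSet along these lines (a sketch; the name, label, and image are illustrative, not the objects the suite creates):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: nginx   # illustrative image
    EOF
    # Delete the daemon pod on one node; the controller reschedules it there,
    # which is the transition from "available pods: 1" back to "2" seen above.
    kubectl delete pods -l app=daemon-set --field-selector spec.nodeName=iruya-node
    kubectl get pods -l app=daemon-set -o wide
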
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:53:41.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:53:41.497: INFO: Creating ReplicaSet my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b
Dec 28 14:53:41.509: INFO: Pod name my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b: Found 0 pods out of 1
Dec 28 14:53:46.518: INFO: Pod name my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b: Found 1 pods out of 1
Dec 28 14:53:46.518: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b" is running
Dec 28 14:53:48.530: INFO: Pod "my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b-7ph5t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 14:53:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 14:53:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 14:53:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 14:53:41 +0000 UTC Reason: Message:}])
Dec 28 14:53:48.531: INFO: Trying to dial the pod
Dec 28 14:53:53.594: INFO: Controller my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b: Got expected result from replica 1 [my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b-7ph5t]: "my-hostname-basic-65e62a4a-c7cc-4371-a2e3-45ac203eac1b-7ph5t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:53:53.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4155" for this suite.
Dec 28 14:53:59.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:53:59.874: INFO: namespace replicaset-4155 deletion completed in 6.270375276s

• [SLOW TEST:18.436 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
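The ReplicaSet test above only needs a single-replica set whose pod answers HTTP requests with its own hostname. A rough equivalent follows (the image and port are assumptions; the suite uses its own public test image):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-hostname-basic
      template:
        metadata:
          labels:
            app: my-hostname-basic
        spec:
          containers:
          - name: serve-hostname
            image: registry.example/serve-hostname   # hypothetical image that echoes the pod name
            ports:
            - containerPort: 9376
    EOF
    # Each replica should answer with its own pod name, which is what the
    # "Got expected result from replica 1" line above verifies.
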
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:53:59.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 28 14:54:00.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5362'
Dec 28 14:54:03.697: INFO: stderr: ""
Dec 28 14:54:03.697: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 28 14:54:04.705: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:04.705: INFO: Found 0 / 1
Dec 28 14:54:05.708: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:05.708: INFO: Found 0 / 1
Dec 28 14:54:06.713: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:06.713: INFO: Found 0 / 1
Dec 28 14:54:07.713: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:07.714: INFO: Found 0 / 1
Dec 28 14:54:08.743: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:08.743: INFO: Found 0 / 1
Dec 28 14:54:09.707: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:09.707: INFO: Found 0 / 1
Dec 28 14:54:10.706: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:10.706: INFO: Found 0 / 1
Dec 28 14:54:11.708: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:11.708: INFO: Found 0 / 1
Dec 28 14:54:12.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:12.714: INFO: Found 0 / 1
Dec 28 14:54:13.710: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:13.710: INFO: Found 1 / 1
Dec 28 14:54:13.710: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 14:54:13.715: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:54:13.715: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 28 14:54:13.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362'
Dec 28 14:54:13.969: INFO: stderr: ""
Dec 28 14:54:13.970: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 14:54:11.679 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 14:54:11.679 # Server started, Redis version 3.2.12\n1:M 28 Dec 14:54:11.679 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 14:54:11.679 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 28 14:54:13.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362 --tail=1'
Dec 28 14:54:14.110: INFO: stderr: ""
Dec 28 14:54:14.110: INFO: stdout: "1:M 28 Dec 14:54:11.679 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 28 14:54:14.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362 --limit-bytes=1'
Dec 28 14:54:14.249: INFO: stderr: ""
Dec 28 14:54:14.249: INFO: stdout: " "
STEP: exposing timestamps
Dec 28 14:54:14.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362 --tail=1 --timestamps'
Dec 28 14:54:14.348: INFO: stderr: ""
Dec 28 14:54:14.348: INFO: stdout: "2019-12-28T14:54:11.680263871Z 1:M 28 Dec 14:54:11.679 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 28 14:54:16.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362 --since=1s'
Dec 28 14:54:17.066: INFO: stderr: ""
Dec 28 14:54:17.066: INFO: stdout: ""
Dec 28 14:54:17.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vpln6 redis-master --namespace=kubectl-5362 --since=24h'
Dec 28 14:54:17.232: INFO: stderr: ""
Dec 28 14:54:17.232: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 14:54:11.679 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 14:54:11.679 # Server started, Redis version 3.2.12\n1:M 28 Dec 14:54:11.679 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 14:54:11.679 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 28 14:54:17.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5362'
Dec 28 14:54:17.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 14:54:17.363: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 28 14:54:17.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5362'
Dec 28 14:54:17.524: INFO: stderr: "No resources found.\n"
Dec 28 14:54:17.524: INFO: stdout: ""
Dec 28 14:54:17.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5362 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 14:54:17.671: INFO: stderr: ""
Dec 28 14:54:17.671: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:54:17.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5362" for this suite.
Dec 28 14:54:23.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:54:23.914: INFO: namespace kubectl-5362 deletion completed in 6.237127075s

• [SLOW TEST:24.040 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
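The four filtering steps above map directly onto standard kubectl logs flags, shown here against the same pod the test used:

    kubectl logs redis-master-vpln6 redis-master -n kubectl-5362                        # full log
    kubectl logs redis-master-vpln6 redis-master -n kubectl-5362 --tail=1               # last line only
    kubectl logs redis-master-vpln6 redis-master -n kubectl-5362 --limit-bytes=1        # first byte only
    kubectl logs redis-master-vpln6 redis-master -n kubectl-5362 --tail=1 --timestamps  # RFC3339 prefix per line
    kubectl logs redis-master-vpln6 redis-master -n kubectl-5362 --since=1s             # only recent entries
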
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:54:23.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5e21e2d2-8693-4326-98b3-c5c061d2a6c3
STEP: Creating secret with name s-test-opt-upd-94c1531d-dd59-4927-9601-0af3e464058d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5e21e2d2-8693-4326-98b3-c5c061d2a6c3
STEP: Updating secret s-test-opt-upd-94c1531d-dd59-4927-9601-0af3e464058d
STEP: Creating secret with name s-test-opt-create-9a273129-ef21-4cc9-94bd-3af3cc9c54b7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:54:38.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-160" for this suite.
Dec 28 14:55:00.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:55:00.509: INFO: namespace projected-160 deletion completed in 22.149785829s

• [SLOW TEST:36.593 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
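The optional-update flow above (delete one source secret, update another, create a third, then watch the volume converge) only works because each secret source in the projected volume is marked optional. A minimal sketch, with illustrative names and image:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: s-test-opt-del   # deleting this secret must not break the volume
              optional: true
          - secret:
              name: s-test-opt-upd   # updates here propagate into /etc/creds
              optional: true
    EOF
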
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:55:00.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 28 14:55:11.091: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:55:11.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3219" for this suite.
Dec 28 14:55:17.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:55:17.396: INFO: namespace container-runtime-3219 deletion completed in 6.151764338s

• [SLOW TEST:16.885 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
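The assertion "Expected: &{} to match Container's Termination Message" passes because FallbackToLogsOnError only substitutes container logs when the container fails; on success the message stays empty. A sketch with illustrative name and image:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "exit 0"]   # succeeds and writes no message
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termination-message-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # empty on success
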
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:55:17.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 28 14:55:17.584: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 28 14:55:18.079: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 28 14:55:20.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:55:22.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:55:24.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:55:26.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:55:28.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713141718, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 14:55:34.761: INFO: Waited 4.364338689s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:55:36.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7668" for this suite.
Dec 28 14:55:42.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:55:42.324: INFO: namespace aggregator-7668 deletion completed in 6.137470647s

• [SLOW TEST:24.928 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
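Registering a sample API server, as the test does above, boils down to deploying the extension apiserver behind a Service and creating an APIService object that tells the aggregator where to proxy the group/version. Roughly as follows (the group, names, and priorities are assumptions, not what the suite registers):

    kubectl apply -f - <<EOF
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.example.com
    spec:
      group: wardle.example.com
      version: v1alpha1
      service:
        name: sample-api            # Service fronting sample-apiserver-deployment (assumed name)
        namespace: aggregator-7668
      groupPriorityMinimum: 2000
      versionPriority: 200
      insecureSkipTLSVerify: true   # sketch only; supply spec.caBundle in practice
    EOF
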
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:55:42.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 14:55:42.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec" in namespace "downward-api-9131" to be "success or failure"
Dec 28 14:55:42.473: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec": Phase="Pending", Reason="", readiness=false. Elapsed: 33.623737ms
Dec 28 14:55:44.481: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041801617s
Dec 28 14:55:46.618: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17858166s
Dec 28 14:55:48.630: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190093382s
Dec 28 14:55:50.636: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.196793328s
STEP: Saw pod success
Dec 28 14:55:50.636: INFO: Pod "downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec" satisfied condition "success or failure"
Dec 28 14:55:50.641: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec container client-container: 
STEP: delete the pod
Dec 28 14:55:50.783: INFO: Waiting for pod downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec to disappear
Dec 28 14:55:50.796: INFO: Pod downwardapi-volume-658c5778-6428-4585-b1a1-72c8d41cfbec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:55:50.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9131" for this suite.
Dec 28 14:55:56.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:55:56.970: INFO: namespace downward-api-9131 deletion completed in 6.169041912s

• [SLOW TEST:14.646 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
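The "default memory limit" behavior above comes from a downwardAPI volume item with a resourceFieldRef: when the container declares no memory limit, the reported value falls back to the node's allocatable memory. A sketch with illustrative names and image:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # no limit set, so node allocatable is reported
    EOF
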
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:55:56.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9494
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9494
STEP: Deleting pre-stop pod
Dec 28 14:56:20.176: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:56:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9494" for this suite.
Dec 28 14:57:00.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:57:00.374: INFO: namespace prestop-9494 deletion completed in 40.146611815s

• [SLOW TEST:63.402 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
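The "prestop": 1 entry in the report above is the server pod recording that the tester's preStop hook fired during deletion. The hook itself is a plain lifecycle handler; a sketch follows (the suite's hook calls back to its server pod rather than echoing):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo
    spec:
      containers:
      - name: tester
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "echo goodbye"]
    EOF
    kubectl delete pod prestop-demo   # the preStop hook runs before SIGTERM is sent
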
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:57:00.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 14:57:00.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:57:08.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2426" for this suite.
Dec 28 14:57:50.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:57:50.858: INFO: namespace pods-2426 deletion completed in 42.161103448s

• [SLOW TEST:50.484 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
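The websocket variant above exercises the same pod log endpoint that kubectl logs uses, just upgraded to a websocket connection. The raw endpoint is reachable directly over HTTPS (the pod name here is hypothetical; the suite generates its own):

    kubectl get --raw "/api/v1/namespaces/pods-2426/pods/pod-logs-websocket/log"
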
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:57:50.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 28 14:57:50.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9470'
Dec 28 14:57:51.449: INFO: stderr: ""
Dec 28 14:57:51.449: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 28 14:57:52.462: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:52.462: INFO: Found 0 / 1
Dec 28 14:57:53.457: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:53.457: INFO: Found 0 / 1
Dec 28 14:57:54.460: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:54.460: INFO: Found 0 / 1
Dec 28 14:57:55.459: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:55.459: INFO: Found 0 / 1
Dec 28 14:57:56.491: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:56.491: INFO: Found 0 / 1
Dec 28 14:57:57.460: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:57.461: INFO: Found 0 / 1
Dec 28 14:57:58.462: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:58.462: INFO: Found 0 / 1
Dec 28 14:57:59.461: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:59.461: INFO: Found 1 / 1
Dec 28 14:57:59.461: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 28 14:57:59.470: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:59.471: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 14:57:59.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fvlh5 --namespace=kubectl-9470 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 28 14:57:59.633: INFO: stderr: ""
Dec 28 14:57:59.634: INFO: stdout: "pod/redis-master-fvlh5 patched\n"
STEP: checking annotations
Dec 28 14:57:59.661: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 14:57:59.661: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:57:59.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9470" for this suite.
Dec 28 14:58:21.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:58:21.911: INFO: namespace kubectl-9470 deletion completed in 22.242392445s

• [SLOW TEST:31.052 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
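The patch step above is a plain strategic-merge patch, and verifying the annotation afterwards is one jsonpath query away:

    kubectl patch pod redis-master-fvlh5 -n kubectl-9470 \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod redis-master-fvlh5 -n kubectl-9470 \
      -o jsonpath='{.metadata.annotations.x}'   # prints: y
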
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:58:21.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e4c5d407-7bde-4d7b-bd7d-3b5dcf9cb0f5 in namespace container-probe-4859
Dec 28 14:58:30.085: INFO: Started pod liveness-e4c5d407-7bde-4d7b-bd7d-3b5dcf9cb0f5 in namespace container-probe-4859
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 14:58:30.087: INFO: Initial restart count of pod liveness-e4c5d407-7bde-4d7b-bd7d-3b5dcf9cb0f5 is 0
Dec 28 14:58:50.181: INFO: Restart count of pod container-probe-4859/liveness-e4c5d407-7bde-4d7b-bd7d-3b5dcf9cb0f5 is now 1 (20.093904084s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:58:50.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4859" for this suite.
Dec 28 14:58:56.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:58:56.568: INFO: namespace container-probe-4859 deletion completed in 6.288300199s

• [SLOW TEST:34.656 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
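The restart at the 20-second mark above is driven by an httpGet liveness probe against /healthz. A sketch follows (the image is an assumption; any server whose /healthz eventually starts failing will do):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo
    spec:
      containers:
      - name: liveness
        image: registry.example/liveness   # hypothetical image that fails /healthz after a while
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
          failureThreshold: 1
    EOF
    # Watch restartCount climb once /healthz starts returning errors:
    kubectl get pod liveness-http-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'
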
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:58:56.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 28 14:58:57.247: INFO: created pod pod-service-account-defaultsa
Dec 28 14:58:57.247: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 28 14:58:57.261: INFO: created pod pod-service-account-mountsa
Dec 28 14:58:57.261: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 28 14:58:57.282: INFO: created pod pod-service-account-nomountsa
Dec 28 14:58:57.282: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 28 14:58:57.346: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 28 14:58:57.346: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 28 14:58:57.472: INFO: created pod pod-service-account-mountsa-mountspec
Dec 28 14:58:57.472: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 28 14:58:57.517: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 28 14:58:57.518: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 28 14:58:58.525: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 28 14:58:58.525: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 28 14:58:59.292: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 28 14:58:59.292: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 28 14:58:59.736: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 28 14:58:59.736: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 14:58:59.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1874" for this suite.
Dec 28 14:59:58.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 14:59:58.508: INFO: namespace svcaccounts-1874 deletion completed in 58.675014121s

• [SLOW TEST:61.940 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
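The nine pods above cover the automount matrix: when the pod-level automountServiceAccountToken field is set, it overrides whatever the ServiceAccount declares. The opt-out itself is a single field (name and image illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-no-token
    spec:
      automountServiceAccountToken: false   # pod spec wins over the ServiceAccount setting
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount; sleep 3600"]
    EOF
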
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 14:59:58.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1241572c-3ffe-4bc0-8078-f88708a2eadd
STEP: Creating a pod to test consume configMaps
Dec 28 14:59:58.694: INFO: Waiting up to 5m0s for pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588" in namespace "configmap-6938" to be "success or failure"
Dec 28 14:59:58.762: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Pending", Reason="", readiness=false. Elapsed: 67.934178ms
Dec 28 15:00:00.772: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077368021s
Dec 28 15:00:02.781: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087186683s
Dec 28 15:00:04.795: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100959277s
Dec 28 15:00:06.805: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110713389s
Dec 28 15:00:08.813: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118340709s
STEP: Saw pod success
Dec 28 15:00:08.813: INFO: Pod "pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588" satisfied condition "success or failure"
Dec 28 15:00:08.822: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588 container configmap-volume-test: 
STEP: delete the pod
Dec 28 15:00:08.881: INFO: Waiting for pod pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588 to disappear
Dec 28 15:00:08.891: INFO: Pod pod-configmaps-fce13b27-1068-48a7-ba80-423e220ff588 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:00:08.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6938" for this suite.
Dec 28 15:00:14.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:00:15.077: INFO: namespace configmap-6938 deletion completed in 6.179437685s

• [SLOW TEST:16.569 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
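The "with mappings as non-root" variant above combines two things: an items list that remaps a ConfigMap key to a custom path, and a pod-level runAsUser. A sketch (names, key, and image are illustrative):

    kubectl create configmap configmap-test-volume-map --from-literal=data-1=value-1
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-mappings
    spec:
      securityContext:
        runAsUser: 1000     # the non-root part of the test
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/config/path/to/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        configMap:
          name: configmap-test-volume-map
          items:
          - key: data-1              # the mapping part of the test
            path: path/to/data-1
    EOF
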
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:00:15.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-b5c46dd5-b36f-4b87-9ff5-341a565409f4
STEP: Creating secret with name secret-projected-all-test-volume-a58d3668-7873-4b6b-a7e7-2344d91cc303
STEP: Creating a pod to test all projections for the projected volume plugin
Dec 28 15:00:15.227: INFO: Waiting up to 5m0s for pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1" in namespace "projected-4488" to be "success or failure"
Dec 28 15:00:15.236: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.590568ms
Dec 28 15:00:17.247: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020471373s
Dec 28 15:00:19.647: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419835716s
Dec 28 15:00:21.656: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429809158s
Dec 28 15:00:23.665: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.438199215s
STEP: Saw pod success
Dec 28 15:00:23.665: INFO: Pod "projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1" satisfied condition "success or failure"
Dec 28 15:00:23.668: INFO: Trying to get logs from node iruya-node pod projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1 container projected-all-volume-test: 
STEP: delete the pod
Dec 28 15:00:23.722: INFO: Waiting for pod projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1 to disappear
Dec 28 15:00:23.729: INFO: Pod projected-volume-b45ec173-1e7e-4ae2-8a8d-f2c5331734a1 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:00:23.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4488" for this suite.
Dec 28 15:00:29.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:00:29.932: INFO: namespace projected-4488 deletion completed in 6.195778584s

• [SLOW TEST:14.854 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
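"All components" above means one projected volume whose sources mix a ConfigMap, a Secret, and the downward API, all landing under a single mount. A sketch with illustrative names and image:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-all-volume-test
        image: busybox
        command: ["sh", "-c", "ls -R /all-in-one"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all-in-one
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: configmap-projected-all-test-volume
          - secret:
              name: secret-projected-all-test-volume
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF
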
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:00:29.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 28 15:00:30.180: INFO: Waiting up to 5m0s for pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424" in namespace "downward-api-5047" to be "success or failure"
Dec 28 15:00:30.202: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424": Phase="Pending", Reason="", readiness=false. Elapsed: 21.763041ms
Dec 28 15:00:32.211: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030809301s
Dec 28 15:00:34.220: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040103338s
Dec 28 15:00:36.245: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064449885s
Dec 28 15:00:38.255: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07449415s
STEP: Saw pod success
Dec 28 15:00:38.255: INFO: Pod "downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424" satisfied condition "success or failure"
Dec 28 15:00:38.261: INFO: Trying to get logs from node iruya-node pod downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424 container dapi-container: 
STEP: delete the pod
Dec 28 15:00:38.414: INFO: Waiting for pod downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424 to disappear
Dec 28 15:00:38.478: INFO: Pod downward-api-ae6675a4-0298-45c2-99c6-2e1b22e90424 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:00:38.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5047" for this suite.
Dec 28 15:00:44.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:00:44.640: INFO: namespace downward-api-5047 deletion completed in 6.151227035s

• [SLOW TEST:14.707 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
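The env-var flavor of the downward API used above needs no volume at all; each variable takes a valueFrom.fieldRef (container name and image are illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep ^POD_"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF
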
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:00:44.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 28 15:00:44.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af" in namespace "projected-168" to be "success or failure"
Dec 28 15:00:44.768: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637411ms
Dec 28 15:00:46.780: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015570564s
Dec 28 15:00:48.789: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025445428s
Dec 28 15:00:51.492: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.728429771s
Dec 28 15:00:53.498: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.733650032s
STEP: Saw pod success
Dec 28 15:00:53.498: INFO: Pod "downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af" satisfied condition "success or failure"
Dec 28 15:00:53.501: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af container client-container: 
STEP: delete the pod
Dec 28 15:00:53.625: INFO: Waiting for pod downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af to disappear
Dec 28 15:00:53.644: INFO: Pod downwardapi-volume-1ae36794-1152-4139-9460-3258c5c335af no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:00:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-168" for this suite.
Dec 28 15:00:59.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:00:59.891: INFO: namespace projected-168 deletion completed in 6.239008384s

• [SLOW TEST:15.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
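[Editor's note] The "podname only" spec exercises the projected-volume form of the downward API: the pod name lands in a file rather than an environment variable. A minimal sketch, with illustrative names:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-dapi-demo       # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29           # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef: {fieldPath: metadata.name}
  EOF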
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:00:59.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 28 15:01:00.122: INFO: namespace kubectl-1679
Dec 28 15:01:00.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1679'
Dec 28 15:01:00.540: INFO: stderr: ""
Dec 28 15:01:00.540: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 28 15:01:01.567: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:01.567: INFO: Found 0 / 1
Dec 28 15:01:02.562: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:02.562: INFO: Found 0 / 1
Dec 28 15:01:03.549: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:03.549: INFO: Found 0 / 1
Dec 28 15:01:04.559: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:04.559: INFO: Found 0 / 1
Dec 28 15:01:05.549: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:05.549: INFO: Found 0 / 1
Dec 28 15:01:06.551: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:06.552: INFO: Found 0 / 1
Dec 28 15:01:07.549: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:07.549: INFO: Found 1 / 1
Dec 28 15:01:07.549: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 15:01:07.554: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 15:01:07.554: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 15:01:07.554: INFO: wait on redis-master startup in kubectl-1679 
Dec 28 15:01:07.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7qgm redis-master --namespace=kubectl-1679'
Dec 28 15:01:07.837: INFO: stderr: ""
Dec 28 15:01:07.837: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 15:01:06.652 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 15:01:06.652 # Server started, Redis version 3.2.12\n1:M 28 Dec 15:01:06.653 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 15:01:06.653 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 28 15:01:07.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1679'
Dec 28 15:01:08.074: INFO: stderr: ""
Dec 28 15:01:08.074: INFO: stdout: "service/rm2 exposed\n"
Dec 28 15:01:08.082: INFO: Service rm2 in namespace kubectl-1679 found.
STEP: exposing service
Dec 28 15:01:10.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1679'
Dec 28 15:01:10.321: INFO: stderr: ""
Dec 28 15:01:10.321: INFO: stdout: "service/rm3 exposed\n"
Dec 28 15:01:10.334: INFO: Service rm3 in namespace kubectl-1679 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:01:12.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1679" for this suite.
Dec 28 15:01:36.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:01:36.622: INFO: namespace kubectl-1679 deletion completed in 24.268900079s

• [SLOW TEST:36.730 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
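[Editor's note] The expose steps above can be reproduced as-is; only the --namespace and --kubeconfig flags are dropped here. The first command fronts the RC's pods on service port 1234, the second re-exposes the resulting service on 2345, both targeting container port 6379:

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  kubectl get svc rm2 rm3        # both should list ClusterIPs and the chosen ports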
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:01:36.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4231
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4231
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4231
Dec 28 15:01:36.812: INFO: Found 0 stateful pods, waiting for 1
Dec 28 15:01:46.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 28 15:01:46.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 15:01:47.410: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 15:01:47.410: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 15:01:47.410: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 15:01:47.419: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 28 15:01:57.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 15:01:57.432: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 15:01:57.502: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999932s
Dec 28 15:01:58.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975534461s
Dec 28 15:01:59.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.662809879s
Dec 28 15:02:00.855: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.63554854s
Dec 28 15:02:01.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.622325102s
Dec 28 15:02:02.893: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.609612472s
Dec 28 15:02:03.907: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.584860931s
Dec 28 15:02:04.916: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.570514154s
Dec 28 15:02:05.926: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.561506399s
Dec 28 15:02:06.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 551.993595ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4231
Dec 28 15:02:07.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 15:02:08.621: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 15:02:08.622: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 15:02:08.622: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 15:02:08.634: INFO: Found 1 stateful pods, waiting for 3
Dec 28 15:02:18.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:02:18.646: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:02:18.646: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 15:02:28.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:02:28.646: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:02:28.646: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 28 15:02:28.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 15:02:29.210: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 15:02:29.210: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 15:02:29.210: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 15:02:29.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 15:02:29.541: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 15:02:29.541: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 15:02:29.541: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 15:02:29.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 15:02:30.103: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 28 15:02:30.103: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 15:02:30.103: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 15:02:30.103: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 15:02:30.108: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 28 15:02:40.123: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 15:02:40.123: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 15:02:40.123: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 15:02:40.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999603s
Dec 28 15:02:41.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976072289s
Dec 28 15:02:42.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961173989s
Dec 28 15:02:43.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.9315517s
Dec 28 15:02:44.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.919694751s
Dec 28 15:02:45.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.913153451s
Dec 28 15:02:46.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.286363141s
Dec 28 15:02:47.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.276311309s
Dec 28 15:02:48.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.262654931s
Dec 28 15:02:49.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 250.396591ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4231
Dec 28 15:02:50.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 15:02:51.414: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 15:02:51.414: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 15:02:51.414: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 15:02:51.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 15:02:51.743: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 15:02:51.743: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 15:02:51.743: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 15:02:51.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4231 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 15:02:52.607: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 28 15:02:52.607: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 15:02:52.607: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 15:02:52.607: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 28 15:03:32.650: INFO: Deleting all statefulset in ns statefulset-4231
Dec 28 15:03:32.654: INFO: Scaling statefulset ss to 0
Dec 28 15:03:32.662: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 15:03:32.665: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:03:32.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4231" for this suite.
Dec 28 15:03:38.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:03:38.834: INFO: namespace statefulset-4231 deletion completed in 6.143386572s

• [SLOW TEST:122.212 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
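[Editor's note] The scaling spec works by breaking readiness: moving index.html aside makes the pod's probe fail, and with OrderedReady pod management the controller refuses to move past an unready pod, which is the "doesn't scale past N" loop in the log. A sketch of a compatible StatefulSet; the name, labels, and service come from the log, while the image and readiness probe are assumptions consistent with the nginx paths in the exec commands above:

  kubectl apply -f - <<EOF
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    podManagementPolicy: OrderedReady   # the default: scale up/down one ordinal at a time
    replicas: 1
    selector:
      matchLabels: {baz: blah, foo: bar}
    template:
      metadata:
        labels: {baz: blah, foo: bar}
      spec:
        containers:
        - name: nginx
          image: nginx:1.14-alpine      # assumed; consistent with the paths exec'd above
          readinessProbe:               # assumed probe; fails once index.html is moved
            httpGet: {path: /index.html, port: 80}
            periodSeconds: 1
  EOF

With this in place, kubectl exec ss-0 -- mv /usr/share/nginx/html/index.html /tmp/ marks ss-0 unready, and kubectl scale statefulset ss --replicas=3 holds at one pod until the file is moved back.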
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:03:38.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 15:03:38.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5561'
Dec 28 15:03:39.134: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 15:03:39.134: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 28 15:03:39.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5561'
Dec 28 15:03:39.437: INFO: stderr: ""
Dec 28 15:03:39.437: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:03:39.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5561" for this suite.
Dec 28 15:03:45.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:03:45.705: INFO: namespace kubectl-5561 deletion completed in 6.227993191s

• [SLOW TEST:6.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
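[Editor's note] The stderr line above is the v1.15 deprecation warning for generator-based kubectl run. The same image can be started with the forms the warning recommends; all three commands below are standard v1.15 kubectl:

  # What the spec ran (creates a Deployment, with the deprecation warning on stderr):
  kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  # Replacements suggested by the warning:
  kubectl run nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
  kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine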
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:03:45.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0a498582-85e4-4d91-a5ae-5bd304735aa5
STEP: Creating a pod to test consume configMaps
Dec 28 15:03:45.894: INFO: Waiting up to 5m0s for pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71" in namespace "configmap-267" to be "success or failure"
Dec 28 15:03:45.927: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Pending", Reason="", readiness=false. Elapsed: 32.532299ms
Dec 28 15:03:47.934: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040372112s
Dec 28 15:03:49.950: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056071705s
Dec 28 15:03:51.958: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063430788s
Dec 28 15:03:53.978: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083437757s
Dec 28 15:03:55.985: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091058764s
STEP: Saw pod success
Dec 28 15:03:55.985: INFO: Pod "pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71" satisfied condition "success or failure"
Dec 28 15:03:55.989: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71 container configmap-volume-test: 
STEP: delete the pod
Dec 28 15:03:56.071: INFO: Waiting for pod pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71 to disappear
Dec 28 15:03:56.109: INFO: Pod pod-configmaps-6fa388c9-cfbb-407e-9ef2-6455cebccd71 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:03:56.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-267" for this suite.
Dec 28 15:04:02.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:04:02.292: INFO: namespace configmap-267 deletion completed in 6.17647993s

• [SLOW TEST:16.587 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
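[Editor's note] The ConfigMap volume spec mounts a ConfigMap as files and reads one back. A minimal sketch with illustrative names and data (the run's ConfigMap contents are not shown in the log):

  kubectl create configmap demo-cm --from-literal=data-1=value-1   # illustrative key/value
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/config/data-1"]   # each key becomes a file
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap: {name: demo-cm}
  EOF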
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:04:02.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 15:04:10.609: INFO: Waiting up to 5m0s for pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743" in namespace "pods-5298" to be "success or failure"
Dec 28 15:04:10.619: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Pending", Reason="", readiness=false. Elapsed: 9.97966ms
Dec 28 15:04:12.636: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027264276s
Dec 28 15:04:14.648: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038307395s
Dec 28 15:04:16.655: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046270015s
Dec 28 15:04:18.666: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056972043s
Dec 28 15:04:20.676: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066740153s
STEP: Saw pod success
Dec 28 15:04:20.676: INFO: Pod "client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743" satisfied condition "success or failure"
Dec 28 15:04:20.679: INFO: Trying to get logs from node iruya-node pod client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743 container env3cont: 
STEP: delete the pod
Dec 28 15:04:20.765: INFO: Waiting for pod client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743 to disappear
Dec 28 15:04:20.798: INFO: Pod client-envvars-f57f2c36-d15b-4ee2-a6dd-9c4e8943b743 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:04:20.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5298" for this suite.
Dec 28 15:05:16.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:05:16.976: INFO: namespace pods-5298 deletion completed in 56.173256775s

• [SLOW TEST:74.683 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
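[Editor's note] This spec relies on the kubelet injecting Docker-links-style variables for every service that already exists when a pod starts, which is why the service must be created first. A sketch, with an illustrative service name and ports:

  kubectl create service clusterip fooservice --tcp=8765:8080   # must exist before the pod starts
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: env3cont-demo             # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: env3cont
      image: busybox:1.29
      command: ["sh", "-c", "env | grep FOOSERVICE"]
  EOF
  # Expected output includes FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT,
  # and FOOSERVICE_PORT_8765_TCP_* variables.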
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:05:16.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 28 15:05:17.161: INFO: Waiting up to 5m0s for pod "pod-7000ca34-2d64-452c-88a0-891e665a722b" in namespace "emptydir-9287" to be "success or failure"
Dec 28 15:05:17.208: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.800569ms
Dec 28 15:05:19.216: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054840677s
Dec 28 15:05:21.235: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073418282s
Dec 28 15:05:23.252: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090441479s
Dec 28 15:05:25.285: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123789176s
Dec 28 15:05:27.295: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13389144s
STEP: Saw pod success
Dec 28 15:05:27.295: INFO: Pod "pod-7000ca34-2d64-452c-88a0-891e665a722b" satisfied condition "success or failure"
Dec 28 15:05:27.299: INFO: Trying to get logs from node iruya-node pod pod-7000ca34-2d64-452c-88a0-891e665a722b container test-container: 
STEP: delete the pod
Dec 28 15:05:27.368: INFO: Waiting for pod pod-7000ca34-2d64-452c-88a0-891e665a722b to disappear
Dec 28 15:05:27.373: INFO: Pod pod-7000ca34-2d64-452c-88a0-891e665a722b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:05:27.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9287" for this suite.
Dec 28 15:05:33.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:05:33.510: INFO: namespace emptydir-9287 deletion completed in 6.131283518s

• [SLOW TEST:16.533 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
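[Editor's note] The (root,0644,tmpfs) variant writes a file into a memory-backed emptyDir and checks its mode and filesystem type. A sketch in which plain busybox commands stand in for the suite's mount-test image:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo       # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && grep ' /test-volume ' /proc/mounts"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory              # tmpfs backing, the variant under test
  EOF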
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:05:33.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:05:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1446" for this suite.
Dec 28 15:05:48.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:05:48.351: INFO: namespace emptydir-wrapper-1446 deletion completed in 6.17893822s

• [SLOW TEST:14.839 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
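[Editor's note] The wrapper spec checks that secret and ConfigMap volumes, both built on emptyDir "wrapper" volumes inside the kubelet, can coexist in one pod without conflicting. A minimal sketch with illustrative names:

  kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
  kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo              # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: wrapper
      image: busybox:1.29
      command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
      volumeMounts:
      - {name: secret-volume, mountPath: /etc/secret-volume}
      - {name: configmap-volume, mountPath: /etc/configmap-volume}
    volumes:
    - name: secret-volume
      secret: {secretName: wrapper-secret}
    - name: configmap-volume
      configMap: {name: wrapper-configmap}
  EOF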
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:05:48.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:05:48.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5730" for this suite.
Dec 28 15:05:54.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:05:54.851: INFO: namespace kubelet-test-5730 deletion completed in 6.319661111s

• [SLOW TEST:6.500 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
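[Editor's note] The kubelet spec above creates a pod whose command always fails, then verifies deletion still works while the container is in a restart loop. Reproducible along these lines (names illustrative):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo            # illustrative name
  spec:
    restartPolicy: Always
    containers:
    - name: bin-false
      image: busybox:1.29
      command: ["/bin/false"]       # exits non-zero every time, so the pod crash-loops
  EOF
  kubectl delete pod bin-false-demo   # should complete despite the CrashLoopBackOff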
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:05:54.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-qj2h
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 15:05:55.135: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qj2h" in namespace "subpath-2500" to be "success or failure"
Dec 28 15:05:55.159: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Pending", Reason="", readiness=false. Elapsed: 23.656862ms
Dec 28 15:05:57.167: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032189904s
Dec 28 15:05:59.175: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039826312s
Dec 28 15:06:01.182: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046729683s
Dec 28 15:06:03.190: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 8.054359009s
Dec 28 15:06:05.199: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 10.063622784s
Dec 28 15:06:07.210: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 12.075107542s
Dec 28 15:06:09.224: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 14.089138816s
Dec 28 15:06:11.238: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 16.103064556s
Dec 28 15:06:13.254: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 18.118740429s
Dec 28 15:06:15.267: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 20.132065679s
Dec 28 15:06:17.281: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 22.145927498s
Dec 28 15:06:19.294: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 24.158818367s
Dec 28 15:06:21.307: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Running", Reason="", readiness=true. Elapsed: 26.171845816s
Dec 28 15:06:23.329: INFO: Pod "pod-subpath-test-downwardapi-qj2h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.193480694s
STEP: Saw pod success
Dec 28 15:06:23.329: INFO: Pod "pod-subpath-test-downwardapi-qj2h" satisfied condition "success or failure"
Dec 28 15:06:23.337: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-qj2h container test-container-subpath-downwardapi-qj2h: 
STEP: delete the pod
Dec 28 15:06:23.448: INFO: Waiting for pod pod-subpath-test-downwardapi-qj2h to disappear
Dec 28 15:06:23.496: INFO: Pod pod-subpath-test-downwardapi-qj2h no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qj2h
Dec 28 15:06:23.497: INFO: Deleting pod "pod-subpath-test-downwardapi-qj2h" in namespace "subpath-2500"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:06:23.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2500" for this suite.
Dec 28 15:06:29.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:06:29.712: INFO: namespace subpath-2500 deletion completed in 6.206570903s

• [SLOW TEST:34.860 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
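[Editor's note] The subpath spec mounts a single file out of a downward API volume via subPath and keeps reading it while the atomic writer updates the volume, hence the long Running phase above. A reduced sketch (the real test also verifies content across updates):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-dapi-demo         # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath
      image: busybox:1.29
      command: ["sh", "-c", "cat /test/podname"]
      volumeMounts:
      - name: dapi
        mountPath: /test/podname
        subPath: podname            # mounts just this one file from the volume
    volumes:
    - name: dapi
      downwardAPI:
        items:
        - path: podname
          fieldRef: {fieldPath: metadata.name}
  EOF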
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:06:29.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2933aafd-bff4-4604-a773-635199086d74
STEP: Creating a pod to test consume configMaps
Dec 28 15:06:29.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846" in namespace "configmap-6227" to be "success or failure"
Dec 28 15:06:29.914: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Pending", Reason="", readiness=false. Elapsed: 28.09644ms
Dec 28 15:06:31.923: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037677208s
Dec 28 15:06:33.933: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047352861s
Dec 28 15:06:35.940: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054821782s
Dec 28 15:06:37.954: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068419889s
Dec 28 15:06:39.961: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075518256s
STEP: Saw pod success
Dec 28 15:06:39.961: INFO: Pod "pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846" satisfied condition "success or failure"
Dec 28 15:06:39.964: INFO: Trying to get logs from node iruya-node pod pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846 container configmap-volume-test: 
STEP: delete the pod
Dec 28 15:06:40.145: INFO: Waiting for pod pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846 to disappear
Dec 28 15:06:40.155: INFO: Pod pod-configmaps-33a497c7-3afe-4b70-9fa4-16cc2e711846 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:06:40.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6227" for this suite.
Dec 28 15:06:46.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:06:46.341: INFO: namespace configmap-6227 deletion completed in 6.1782014s

• [SLOW TEST:16.628 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
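[Editor's note] "Mappings and Item mode" means the ConfigMap keys are remapped to new file paths with an explicit per-item file mode. A sketch with illustrative key names; 0400 is the kind of override the spec asserts on:

  kubectl create configmap demo-map-cm --from-literal=data-1=value-1
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mapped-demo            # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/config/path/to/data-2 && stat -c '%a' /etc/config/path/to/data-2"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-map-cm
        items:
        - key: data-1
          path: path/to/data-2      # remapped from the original key name
          mode: 0400                # per-item mode; stat should print 400
  EOF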
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:06:46.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3157
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 28 15:06:46.487: INFO: Found 0 stateful pods, waiting for 3
Dec 28 15:06:56.506: INFO: Found 2 stateful pods, waiting for 3
Dec 28 15:07:06.504: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:06.504: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:06.504: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 15:07:16.521: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:16.521: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:16.521: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 28 15:07:16.589: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 28 15:07:26.721: INFO: Updating stateful set ss2
Dec 28 15:07:26.793: INFO: Waiting for Pod statefulset-3157/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 28 15:07:36.994: INFO: Found 2 stateful pods, waiting for 3
Dec 28 15:07:47.005: INFO: Found 2 stateful pods, waiting for 3
Dec 28 15:07:57.002: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:57.002: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:07:57.002: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 15:08:07.003: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:08:07.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 15:08:07.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 28 15:08:07.055: INFO: Updating stateful set ss2
Dec 28 15:08:07.106: INFO: Waiting for Pod statefulset-3157/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 15:08:18.910: INFO: Updating stateful set ss2
Dec 28 15:08:18.938: INFO: Waiting for StatefulSet statefulset-3157/ss2 to complete update
Dec 28 15:08:18.938: INFO: Waiting for Pod statefulset-3157/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 15:08:28.949: INFO: Waiting for StatefulSet statefulset-3157/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 28 15:08:38.949: INFO: Deleting all statefulset in ns statefulset-3157
Dec 28 15:08:38.954: INFO: Scaling statefulset ss2 to 0
Dec 28 15:09:08.985: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 15:09:08.990: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:09:09.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3157" for this suite.
Dec 28 15:09:17.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:09:17.180: INFO: namespace statefulset-3157 deletion completed in 8.159458808s

• [SLOW TEST:150.839 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
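[Editor's note] The canary and phased stages above are driven by the RollingUpdate partition: only pods with ordinal >= partition receive the new revision. The commands below sketch the same sequence by hand for a three-replica set; the image change matches the one logged, the rest is illustrative:

  # New template image -> new update revision (ss2-7c9b54fd4c in the log):
  kubectl patch statefulset ss2 --type=json \
    -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
  # Canary: partition=2 updates only ss2-2:
  kubectl patch statefulset ss2 \
    -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  # Phased: lower the partition to roll ss2-1 and then ss2-0:
  kubectl patch statefulset ss2 \
    -p='{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'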
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 28 15:09:17.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 28 15:09:17.326: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.55584ms)
Dec 28 15:09:17.336: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.78691ms)
Dec 28 15:09:17.343: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.11409ms)
Dec 28 15:09:17.358: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.821841ms)
Dec 28 15:09:17.385: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 26.699829ms)
Dec 28 15:09:17.394: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.724931ms)
Dec 28 15:09:17.402: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.172894ms)
Dec 28 15:09:17.408: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.226613ms)
Dec 28 15:09:17.415: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.040511ms)
Dec 28 15:09:17.421: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.638391ms)
Dec 28 15:09:17.427: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.937957ms)
Dec 28 15:09:17.432: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.699205ms)
Dec 28 15:09:17.436: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.38547ms)
Dec 28 15:09:17.443: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.130726ms)
Dec 28 15:09:17.449: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.88137ms)
Dec 28 15:09:17.453: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.257415ms)
Dec 28 15:09:17.459: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.406459ms)
Dec 28 15:09:17.464: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.101432ms)
Dec 28 15:09:17.469: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.197715ms)
Dec 28 15:09:17.475: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.214101ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 15:09:17.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6575" for this suite.
Dec 28 15:09:23.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 15:09:23.671: INFO: namespace proxy-6575 deletion completed in 6.190498495s

• [SLOW TEST:6.491 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
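[Editor's note] Each of the twenty requests above goes through the API server's node proxy subresource with the kubelet port spelled out explicitly. The same directory listing can be fetched by hand:

  # :10250 is the kubelet's explicit port; the trailing path is its log directory listing.
  kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"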
SSSSSSSSSSSSS
Dec 28 15:09:23.672: INFO: Running AfterSuite actions on all nodes
Dec 28 15:09:23.672: INFO: Running AfterSuite actions on node 1
Dec 28 15:09:23.672: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7953.398 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS