I0224 12:56:01.090647 8 e2e.go:243] Starting e2e run "f462c093-9538-4cab-9220-52741c5b49ff" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582548959 - Will randomize all specs
Will run 215 of 4412 specs

Feb 24 12:56:01.431: INFO: >>> kubeConfig: /root/.kube/config
Feb 24 12:56:01.436: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 24 12:56:01.465: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 24 12:56:01.514: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 24 12:56:01.514: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 24 12:56:01.514: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 24 12:56:01.522: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 24 12:56:01.523: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 24 12:56:01.523: INFO: e2e test version: v1.15.7
Feb 24 12:56:01.524: INFO: kube-apiserver version: v1.15.1
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:56:01.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 24 12:56:01.651: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 24 12:56:01.653: INFO: namespace kubectl-4688
Feb 24 12:56:01.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4688'
Feb 24 12:56:05.355: INFO: stderr: ""
Feb 24 12:56:05.355: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 24 12:56:06.367: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:06.367: INFO: Found 0 / 1
Feb 24 12:56:07.365: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:07.365: INFO: Found 0 / 1
Feb 24 12:56:08.376: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:08.377: INFO: Found 0 / 1
Feb 24 12:56:09.369: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:09.369: INFO: Found 0 / 1
Feb 24 12:56:10.375: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:10.375: INFO: Found 0 / 1
Feb 24 12:56:11.365: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:11.365: INFO: Found 0 / 1
Feb 24 12:56:12.371: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:12.371: INFO: Found 0 / 1
Feb 24 12:56:13.550: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:13.551: INFO: Found 0 / 1
Feb 24 12:56:14.368: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:14.368: INFO: Found 1 / 1
Feb 24 12:56:14.368: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 24 12:56:14.380: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 12:56:14.380: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 24 12:56:14.380: INFO: wait on redis-master startup in kubectl-4688
Feb 24 12:56:14.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jdwjm redis-master --namespace=kubectl-4688'
Feb 24 12:56:14.633: INFO: stderr: ""
Feb 24 12:56:14.633: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Feb 12:56:12.281 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Feb 12:56:12.282 # Server started, Redis version 3.2.12\n1:M 24 Feb 12:56:12.282 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Feb 12:56:12.282 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 24 12:56:14.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4688'
Feb 24 12:56:14.812: INFO: stderr: ""
Feb 24 12:56:14.812: INFO: stdout: "service/rm2 exposed\n"
Feb 24 12:56:14.862: INFO: Service rm2 in namespace kubectl-4688 found.
STEP: exposing service
Feb 24 12:56:16.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4688'
Feb 24 12:56:17.028: INFO: stderr: ""
Feb 24 12:56:17.028: INFO: stdout: "service/rm3 exposed\n"
Feb 24 12:56:17.045: INFO: Service rm3 in namespace kubectl-4688 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 12:56:19.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4688" for this suite.
Feb 24 12:56:41.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 12:56:41.197: INFO: namespace kubectl-4688 deletion completed in 22.12381027s

• [SLOW TEST:39.673 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:56:41.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-fhnq
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 12:56:41.291: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fhnq" in namespace "subpath-1071" to be "success or failure"
Feb 24 12:56:41.299: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.264095ms
Feb 24 12:56:43.308: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016292701s
Feb 24 12:56:45.317: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025223236s
Feb 24 12:56:47.430: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138041775s
Feb 24 12:56:49.440: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148805902s
Feb 24 12:56:51.453: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 10.161024551s
Feb 24 12:56:53.468: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 12.17685062s
Feb 24 12:56:55.475: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 14.183862515s
Feb 24 12:56:57.479: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 16.18793112s
Feb 24 12:56:59.486: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 18.193973144s
Feb 24 12:57:01.495: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 20.203885229s
Feb 24 12:57:03.528: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 22.236451013s
Feb 24 12:57:05.536: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 24.244473291s
Feb 24 12:57:07.542: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 26.250799702s
Feb 24 12:57:09.598: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 28.30636528s
Feb 24 12:57:11.604: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Running", Reason="", readiness=true. Elapsed: 30.312278355s
Feb 24 12:57:13.649: INFO: Pod "pod-subpath-test-projected-fhnq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.357438699s
STEP: Saw pod success
Feb 24 12:57:13.649: INFO: Pod "pod-subpath-test-projected-fhnq" satisfied condition "success or failure"
Feb 24 12:57:13.655: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-fhnq container test-container-subpath-projected-fhnq:
STEP: delete the pod
Feb 24 12:57:13.752: INFO: Waiting for pod pod-subpath-test-projected-fhnq to disappear
Feb 24 12:57:13.847: INFO: Pod pod-subpath-test-projected-fhnq no longer exists
STEP: Deleting pod pod-subpath-test-projected-fhnq
Feb 24 12:57:13.847: INFO: Deleting pod "pod-subpath-test-projected-fhnq" in namespace "subpath-1071"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 12:57:13.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1071" for this suite.
Feb 24 12:57:19.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 12:57:20.002: INFO: namespace subpath-1071 deletion completed in 6.140673816s

• [SLOW TEST:38.804 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:57:20.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 12:58:20.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4723" for this suite.
Feb 24 12:58:42.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 12:58:42.344: INFO: namespace container-probe-4723 deletion completed in 22.16943483s

• [SLOW TEST:82.342 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:58:42.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8167.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8167.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8167.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8167.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8167.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8167.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 12:58:54.581: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.586: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.595: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-8167.svc.cluster.local from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.602: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.607: INFO: Unable to read jessie_udp@PodARecord from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.615: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba: the server could not find the requested resource (get pods dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba)
Feb 24 12:58:54.615: INFO: Lookups using dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-8167.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 24 12:58:59.681: INFO: DNS probes using dns-8167/dns-test-ceab3352-8b10-4e1a-8d54-bdf506cdb7ba succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 12:58:59.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8167" for this suite.
Feb 24 12:59:05.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 12:59:06.084: INFO: namespace dns-8167 deletion completed in 6.274074134s

• [SLOW TEST:23.739 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:59:06.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 24 12:59:07.205: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 12:59:25.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3372" for this suite.
Feb 24 12:59:47.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 12:59:47.211: INFO: namespace init-container-3372 deletion completed in 22.129221418s

• [SLOW TEST:41.127 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 12:59:47.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2a2d68fd-7442-4383-a6b3-150e684057d6
STEP: Creating a pod to test consume secrets
Feb 24 12:59:47.509: INFO: Waiting up to 5m0s for pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997" in namespace "secrets-167" to be "success or failure"
Feb 24 12:59:47.513: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387825ms
Feb 24 12:59:49.525: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015210649s
Feb 24 12:59:51.543: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033844251s
Feb 24 12:59:53.553: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043547975s
Feb 24 12:59:55.560: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05021684s
Feb 24 12:59:59.084: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 11.574622614s
Feb 24 13:00:01.090: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Pending", Reason="", readiness=false. Elapsed: 13.580518589s
Feb 24 13:00:03.098: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.589119604s
STEP: Saw pod success
Feb 24 13:00:03.099: INFO: Pod "pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997" satisfied condition "success or failure"
Feb 24 13:00:03.103: INFO: Trying to get logs from node iruya-node pod pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997 container secret-volume-test:
STEP: delete the pod
Feb 24 13:00:03.305: INFO: Waiting for pod pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997 to disappear
Feb 24 13:00:03.323: INFO: Pod pod-secrets-9632d109-81a3-4aa8-8e76-807f1fd45997 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:00:03.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-167" for this suite.
Feb 24 13:00:09.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:00:09.483: INFO: namespace secrets-167 deletion completed in 6.147106424s
STEP: Destroying namespace "secret-namespace-9083" for this suite.
Feb 24 13:00:15.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:00:15.607: INFO: namespace secret-namespace-9083 deletion completed in 6.124010371s

• [SLOW TEST:28.396 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:00:15.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 24 13:00:15.726: INFO: Waiting up to 5m0s for pod "downward-api-d4892397-e846-4150-b242-bbb73776983b" in namespace "downward-api-8986" to be "success or failure"
Feb 24 13:00:15.746: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.581763ms
Feb 24 13:00:17.753: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026854202s
Feb 24 13:00:19.820: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09383689s
Feb 24 13:00:21.837: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110069193s
Feb 24 13:00:23.847: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120648211s
Feb 24 13:00:25.855: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128227098s
STEP: Saw pod success
Feb 24 13:00:25.855: INFO: Pod "downward-api-d4892397-e846-4150-b242-bbb73776983b" satisfied condition "success or failure"
Feb 24 13:00:25.858: INFO: Trying to get logs from node iruya-node pod downward-api-d4892397-e846-4150-b242-bbb73776983b container dapi-container:
STEP: delete the pod
Feb 24 13:00:26.059: INFO: Waiting for pod downward-api-d4892397-e846-4150-b242-bbb73776983b to disappear
Feb 24 13:00:26.063: INFO: Pod downward-api-d4892397-e846-4150-b242-bbb73776983b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:00:26.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8986" for this suite.
Feb 24 13:00:32.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:00:32.232: INFO: namespace downward-api-8986 deletion completed in 6.163886249s

• [SLOW TEST:16.625 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:00:32.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0224 13:00:36.691481 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 13:00:36.691: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:00:36.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9505" for this suite.
Feb 24 13:00:42.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:00:43.101: INFO: namespace gc-9505 deletion completed in 6.348352515s

• [SLOW TEST:10.868 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:00:43.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-342331df-c9d9-47c2-b034-511389efa9f0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-342331df-c9d9-47c2-b034-511389efa9f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:00:55.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2466" for this suite.
Feb 24 13:01:17.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:01:17.652: INFO: namespace configmap-2466 deletion completed in 22.196044808s

• [SLOW TEST:34.550 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:01:17.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 13:01:26.868: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:01:26.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4411" for this suite.
Feb 24 13:01:32.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:01:33.096: INFO: namespace container-runtime-4411 deletion completed in 6.152080558s

• [SLOW TEST:15.444 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:01:33.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 24 13:01:33.173: INFO: Waiting up to 5m0s for pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42" in namespace "emptydir-2085" to be "success or failure"
Feb 24 13:01:33.224: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 50.366064ms
Feb 24 13:01:35.231: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057278675s
Feb 24 13:01:37.249: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075124429s
Feb 24 13:01:39.255: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081943925s
Feb 24 13:01:41.261: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087613134s
Feb 24 13:01:43.272: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098261146s
Feb 24 13:01:45.288: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114286092s
STEP: Saw pod success
Feb 24 13:01:45.288: INFO: Pod "pod-f790017e-d39e-4269-bdf4-74ea9b912b42" satisfied condition "success or failure"
Feb 24 13:01:45.299: INFO: Trying to get logs from node iruya-node pod pod-f790017e-d39e-4269-bdf4-74ea9b912b42 container test-container:
STEP: delete the pod
Feb 24 13:01:45.357: INFO: Waiting for pod pod-f790017e-d39e-4269-bdf4-74ea9b912b42 to disappear
Feb 24 13:01:45.511: INFO: Pod pod-f790017e-d39e-4269-bdf4-74ea9b912b42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:01:45.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2085" for this suite.
Feb 24 13:01:51.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:01:51.664: INFO: namespace emptydir-2085 deletion completed in 6.145243593s

• [SLOW TEST:18.567 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:01:51.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:01:51.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943" in namespace "projected-7291" to be "success or failure"
Feb 24 13:01:51.882: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Pending", Reason="", readiness=false. Elapsed: 24.575728ms
Feb 24 13:01:53.898: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040251896s
Feb 24 13:01:55.905: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04799118s
Feb 24 13:01:57.922: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064700809s
Feb 24 13:02:00.424: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566539277s
Feb 24 13:02:02.432: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.574673864s
STEP: Saw pod success
Feb 24 13:02:02.432: INFO: Pod "downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943" satisfied condition "success or failure"
Feb 24 13:02:02.437: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943 container client-container:
STEP: delete the pod
Feb 24 13:02:02.562: INFO: Waiting for pod downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943 to disappear
Feb 24 13:02:02.727: INFO: Pod downwardapi-volume-3792db34-78da-4b55-bb66-e1d9d6fdf943 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:02:02.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7291" for this suite.
Feb 24 13:02:08.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:02:08.939: INFO: namespace projected-7291 deletion completed in 6.195832517s

• [SLOW TEST:17.275 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:02:08.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3669, will wait for the garbage collector to delete the pods
Feb 24 13:02:19.106: INFO: Deleting Job.batch foo took: 20.273783ms
Feb 24 13:02:19.407: INFO: Terminating Job.batch foo pods took: 300.324584ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:03:06.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3669" for this suite.
Feb 24 13:03:12.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:03:12.944: INFO: namespace job-3669 deletion completed in 6.112865592s

• [SLOW TEST:64.005 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:03:12.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 24 13:03:15.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3692'
Feb 24 13:03:15.974: INFO: stderr: ""
Feb 24 13:03:15.974: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 13:03:15.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3692'
Feb 24 13:03:16.097: INFO: stderr: ""
Feb 24 13:03:16.097: INFO: stdout: "update-demo-nautilus-fbpd5 update-demo-nautilus-lq6rd "
Feb 24 13:03:16.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbpd5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:16.247: INFO: stderr: ""
Feb 24 13:03:16.247: INFO: stdout: ""
Feb 24 13:03:16.247: INFO: update-demo-nautilus-fbpd5 is created but not running
Feb 24 13:03:21.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3692'
Feb 24 13:03:21.335: INFO: stderr: ""
Feb 24 13:03:21.335: INFO: stdout: "update-demo-nautilus-fbpd5 update-demo-nautilus-lq6rd "
Feb 24 13:03:21.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbpd5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:23.328: INFO: stderr: ""
Feb 24 13:03:23.328: INFO: stdout: ""
Feb 24 13:03:23.328: INFO: update-demo-nautilus-fbpd5 is created but not running
Feb 24 13:03:28.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3692'
Feb 24 13:03:28.485: INFO: stderr: ""
Feb 24 13:03:28.485: INFO: stdout: "update-demo-nautilus-fbpd5 update-demo-nautilus-lq6rd "
Feb 24 13:03:28.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbpd5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:28.571: INFO: stderr: ""
Feb 24 13:03:28.571: INFO: stdout: "true"
Feb 24 13:03:28.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbpd5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:28.654: INFO: stderr: ""
Feb 24 13:03:28.654: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 13:03:28.654: INFO: validating pod update-demo-nautilus-fbpd5
Feb 24 13:03:28.677: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 24 13:03:28.678: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 13:03:28.678: INFO: update-demo-nautilus-fbpd5 is verified up and running
Feb 24 13:03:28.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lq6rd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:28.755: INFO: stderr: ""
Feb 24 13:03:28.755: INFO: stdout: "true"
Feb 24 13:03:28.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lq6rd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:03:28.846: INFO: stderr: ""
Feb 24 13:03:28.846: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 13:03:28.846: INFO: validating pod update-demo-nautilus-lq6rd
Feb 24 13:03:28.859: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 24 13:03:28.859: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 13:03:28.859: INFO: update-demo-nautilus-lq6rd is verified up and running
STEP: rolling-update to new replication controller
Feb 24 13:03:28.862: INFO: scanned /root for discovery docs:
Feb 24 13:03:28.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3692'
Feb 24 13:04:03.772: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 24 13:04:03.772: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 13:04:03.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3692'
Feb 24 13:04:03.949: INFO: stderr: ""
Feb 24 13:04:03.949: INFO: stdout: "update-demo-kitten-x8psv update-demo-kitten-zwfpc update-demo-nautilus-fbpd5 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 24 13:04:08.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3692'
Feb 24 13:04:09.065: INFO: stderr: ""
Feb 24 13:04:09.065: INFO: stdout: "update-demo-kitten-x8psv update-demo-kitten-zwfpc "
Feb 24 13:04:09.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x8psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:04:09.153: INFO: stderr: ""
Feb 24 13:04:09.153: INFO: stdout: "true"
Feb 24 13:04:09.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x8psv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:04:09.215: INFO: stderr: ""
Feb 24 13:04:09.215: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 24 13:04:09.215: INFO: validating pod update-demo-kitten-x8psv
Feb 24 13:04:09.247: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 24 13:04:09.247: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 24 13:04:09.247: INFO: update-demo-kitten-x8psv is verified up and running
Feb 24 13:04:09.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zwfpc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:04:09.341: INFO: stderr: ""
Feb 24 13:04:09.341: INFO: stdout: "true"
Feb 24 13:04:09.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zwfpc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3692'
Feb 24 13:04:09.422: INFO: stderr: ""
Feb 24 13:04:09.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 24 13:04:09.422: INFO: validating pod update-demo-kitten-zwfpc
Feb 24 13:04:09.437: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 24 13:04:09.437: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 24 13:04:09.437: INFO: update-demo-kitten-zwfpc is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:04:09.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3692" for this suite.
Feb 24 13:04:35.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:04:35.611: INFO: namespace kubectl-3692 deletion completed in 26.166855367s

• [SLOW TEST:82.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:04:35.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 13:04:47.954: INFO: Waiting up to 5m0s for pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7" in namespace "pods-4150" to be "success or failure"
Feb 24 13:04:47.969: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.377532ms
Feb 24 13:04:49.985: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030797778s
Feb 24 13:04:52.003: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049118638s
Feb 24 13:04:54.018: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064076162s
Feb 24 13:04:56.025: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07088199s
Feb 24 13:04:58.031: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076776019s
STEP: Saw pod success
Feb 24 13:04:58.031: INFO: Pod "client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7" satisfied condition "success or failure"
Feb 24 13:04:58.034: INFO: Trying to get logs from node iruya-node pod client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7 container env3cont:
STEP: delete the pod
Feb 24 13:04:58.072: INFO: Waiting for pod client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7 to disappear
Feb 24 13:04:58.083: INFO: Pod client-envvars-9a939528-63c2-4384-92d8-5061bee1bbf7 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:04:58.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4150" for this suite.
Feb 24 13:05:52.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:05:52.211: INFO: namespace pods-4150 deletion completed in 54.106440438s

• [SLOW TEST:76.600 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:05:52.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0224 13:06:03.528978 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 13:06:03.529: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:06:03.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7226" for this suite.
Feb 24 13:06:09.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:06:09.982: INFO: namespace gc-7226 deletion completed in 6.420167865s

• [SLOW TEST:17.770 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:06:09.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6b8967c5-ab1b-4d35-a990-592b90b4a70e
STEP: Creating a pod to test consume configMaps
Feb 24 13:06:10.209: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7" in namespace "projected-9673" to be "success or failure"
Feb 24 13:06:10.220: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.80513ms
Feb 24 13:06:12.231: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021378881s
Feb 24 13:06:14.238: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028123652s
Feb 24 13:06:16.543: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.333218109s
Feb 24 13:06:18.559: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349320164s
Feb 24 13:06:20.570: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.360420822s
Feb 24 13:06:22.585: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.375249348s
STEP: Saw pod success
Feb 24 13:06:22.585: INFO: Pod "pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7" satisfied condition "success or failure"
Feb 24 13:06:22.590: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7 container projected-configmap-volume-test:
STEP: delete the pod
Feb 24 13:06:22.729: INFO: Waiting for pod pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7 to disappear
Feb 24 13:06:22.739: INFO: Pod pod-projected-configmaps-caf2fad7-fb7f-4856-b7af-884ee30188c7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:06:22.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9673" for this suite.
Feb 24 13:06:30.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:06:30.871: INFO: namespace projected-9673 deletion completed in 8.125295858s

• [SLOW TEST:20.889 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:06:30.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 13:06:43.061: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.069: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.075: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.083: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.087: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.092: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.096: INFO: Unable to read jessie_udp@PodARecord from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.100: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62: the server could not find the requested resource (get pods dns-test-37e7d951-b1ea-47fb-b182-277e249eff62)
Feb 24 13:06:43.100: INFO: Lookups using dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 24 13:06:48.167: INFO: DNS probes using dns-1533/dns-test-37e7d951-b1ea-47fb-b182-277e249eff62 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:06:48.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be
ready STEP: Destroying namespace "dns-1533" for this suite. Feb 24 13:06:54.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:06:54.506: INFO: namespace dns-1533 deletion completed in 6.269430538s • [SLOW TEST:23.635 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:06:54.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Feb 24 13:06:54.710: INFO: Waiting up to 5m0s for pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96" in namespace "var-expansion-2201" to be "success or failure" Feb 24 13:06:54.718: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009492ms Feb 24 13:06:56.732: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022036839s Feb 24 13:06:58.740: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030150299s Feb 24 13:07:00.748: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037898327s Feb 24 13:07:02.762: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052570934s STEP: Saw pod success Feb 24 13:07:02.762: INFO: Pod "var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96" satisfied condition "success or failure" Feb 24 13:07:02.767: INFO: Trying to get logs from node iruya-node pod var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96 container dapi-container: STEP: delete the pod Feb 24 13:07:02.908: INFO: Waiting for pod var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96 to disappear Feb 24 13:07:02.956: INFO: Pod var-expansion-5bfa88df-be55-404e-adca-2b70e0a25d96 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:07:02.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2201" for this suite. 
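The var-expansion test above exercises Kubernetes-style env composition, where a later `env` entry may reference an earlier one as `$(NAME)`. A rough standalone simulation of that expansion rule, assuming hypothetical variable names (this is an illustration of the semantics, not the kubelet's actual implementation):

```shell
#!/bin/sh
# Sketch of Kubernetes-style env composition: later entries may reference
# earlier ones with $(NAME). Names and values below are hypothetical.
expand() {  # $1 = raw value; substitutes $(X) for every already-defined name
  out=$1
  for name in $DEFINED; do
    eval val=\$$name
    out=$(printf '%s' "$out" | sed "s|\$(${name})|${val}|g")
  done
  printf '%s' "$out"
}

DEFINED=""
define() {  # define NAME RAW_VALUE, expanding references to earlier entries
  eval "$1=\$(expand \"\$2\")"
  DEFINED="$DEFINED $1"
}

define FOO foo-value
define BAR 'bar-$(FOO)'          # composes FOO into BAR
define BAZ 'baz-$(BAR)-$(FOO)'   # composes both earlier entries
echo "BAZ=$BAZ"
```

Definition order matters, matching the Kubernetes rule that an env entry can only reference variables defined earlier in the list.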
Feb 24 13:07:08.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:07:09.100: INFO: namespace var-expansion-2201 deletion completed in 6.133079845s • [SLOW TEST:14.593 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:07:09.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 24 13:07:09.207: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:07:17.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6028" for this suite. 
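The DNS probe pods earlier in this run loop `dig +noall +answer` and drop an `OK` marker file per record once a lookup returns output, which is why the first poll logs "Unable to read ..." and a later poll reports success. The same retry-then-mark pattern, with a stub resolver in place of `dig` so it runs without a cluster (the stub's answer and its fail-twice schedule are invented for illustration):

```shell
#!/bin/sh
# Offline sketch of the probe-pod loop: retry a lookup until it yields
# output, then record an OK marker, as the wheezy/jessie scripts do.
# `lookup` is a stub resolver; it returns nothing until the 3rd attempt.
RESULTS=${TMPDIR:-/tmp}/dns-probe-$$
mkdir -p "$RESULTS"

lookup() {
  [ "$1" -ge 3 ] && echo "kubernetes.default.svc.cluster.local. 30 IN A 10.96.0.1"
}

for i in 1 2 3 4 5; do
  check=$(lookup "$i") && test -n "$check" \
    && echo OK > "$RESULTS/udp@kubernetes.default" && break
  # the real probe sleeps 1s between attempts
done
cat "$RESULTS/udp@kubernetes.default"
```

The test framework then reads the marker files out of the probe pod; a missing file simply means that record has not resolved yet, not a hard failure.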
Feb 24 13:08:01.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:08:01.726: INFO: namespace pods-6028 deletion completed in 44.165416785s • [SLOW TEST:52.626 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:08:01.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 24 13:08:01.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4175' Feb 24 13:08:04.342: INFO: stderr: "" Feb 24 13:08:04.342: INFO: stdout: "pod/e2e-test-nginx-pod 
created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 24 13:08:04.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4175' Feb 24 13:08:12.343: INFO: stderr: "" Feb 24 13:08:12.343: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:08:12.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4175" for this suite. Feb 24 13:08:18.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:08:18.633: INFO: namespace kubectl-4175 deletion completed in 6.206495135s • [SLOW TEST:16.907 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:08:18.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in 
namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 24 13:08:18.688: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 24 13:08:22.047: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:08:23.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3386" for this suite. Feb 24 13:08:35.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:08:36.051: INFO: namespace replication-controller-3386 deletion completed in 12.476177456s • [SLOW TEST:17.418 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:08:36.051: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 24 13:08:36.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064" in namespace "downward-api-1772" to be "success or failure" Feb 24 13:08:36.405: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 172.935381ms Feb 24 13:08:38.411: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179366976s Feb 24 13:08:40.418: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186233668s Feb 24 13:08:42.426: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194336519s Feb 24 13:08:44.434: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202340542s Feb 24 13:08:46.548: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 10.316533392s Feb 24 13:08:48.559: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Pending", Reason="", readiness=false. Elapsed: 12.326835457s Feb 24 13:08:50.603: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.370876913s STEP: Saw pod success Feb 24 13:08:50.603: INFO: Pod "downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064" satisfied condition "success or failure" Feb 24 13:08:50.607: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064 container client-container: STEP: delete the pod Feb 24 13:08:50.969: INFO: Waiting for pod downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064 to disappear Feb 24 13:08:51.002: INFO: Pod downwardapi-volume-07e33728-1ba8-463a-b622-5c056e9a4064 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:08:51.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1772" for this suite. Feb 24 13:08:57.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:08:57.168: INFO: namespace downward-api-1772 deletion completed in 6.152414938s • [SLOW TEST:21.116 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:08:57.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 24 13:09:07.951: INFO: Successfully updated pod "labelsupdate190daf2b-97b2-4dea-95fd-67c4b0d84e77" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:09:10.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2752" for this suite. Feb 24 13:09:32.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:09:32.168: INFO: namespace downward-api-2752 deletion completed in 22.095948725s • [SLOW TEST:35.000 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:09:32.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 24 13:09:50.511: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:09:50.528: INFO: Pod pod-with-prestop-http-hook still exists Feb 24 13:09:52.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:09:52.549: INFO: Pod pod-with-prestop-http-hook still exists Feb 24 13:09:54.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:09:54.541: INFO: Pod pod-with-prestop-http-hook still exists Feb 24 13:09:56.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:09:56.542: INFO: Pod pod-with-prestop-http-hook still exists Feb 24 13:09:58.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:09:58.540: INFO: Pod pod-with-prestop-http-hook still exists Feb 24 13:10:00.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 24 13:10:00.562: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:10:00.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9793" for this suite. 
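The "Waiting for pod ... to disappear" lines above are a poll-until-absent loop: after deleting the hooked pod, the framework re-reads it every 2s until the apiserver reports it gone. A sketch of that shape, assuming a stubbed `pod_exists` in place of the API read (here it pretends the pod vanishes on the 4th poll):

```shell
#!/bin/sh
# Sketch of the "waiting for pod to disappear" poll. pod_exists stands in
# for an API read such as `kubectl get pod pod-with-prestop-http-hook`.
pod_exists() {
  [ "$1" -lt 4 ]
}

i=1
while pod_exists "$i"; do
  echo "Pod pod-with-prestop-http-hook still exists (poll $i)"
  if [ "$i" -ge 30 ]; then echo "timed out waiting for deletion"; exit 1; fi
  i=$((i + 1))                  # the real loop sleeps 2s between polls
done
echo "Pod pod-with-prestop-http-hook no longer exists"
```

The deletion takes several polls here because the preStop HTTP hook must complete (and the grace period elapse) before the kubelet lets the pod terminate, which is exactly what the test then verifies in "check prestop hook".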
Feb 24 13:10:22.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:10:22.839: INFO: namespace container-lifecycle-hook-9793 deletion completed in 22.186388498s • [SLOW TEST:50.671 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:10:22.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a in namespace container-probe-2571 Feb 24 13:10:30.990: INFO: Started pod liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a in namespace container-probe-2571 STEP: checking the pod's current state and verifying that restartCount is present Feb 24 13:10:30.996: INFO: Initial 
restart count of pod liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is 0 Feb 24 13:10:51.316: INFO: Restart count of pod container-probe-2571/liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is now 1 (20.31925432s elapsed) Feb 24 13:11:11.414: INFO: Restart count of pod container-probe-2571/liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is now 2 (40.417720002s elapsed) Feb 24 13:11:31.511: INFO: Restart count of pod container-probe-2571/liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is now 3 (1m0.5144092s elapsed) Feb 24 13:11:51.666: INFO: Restart count of pod container-probe-2571/liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is now 4 (1m20.669323805s elapsed) Feb 24 13:12:11.756: INFO: Restart count of pod container-probe-2571/liveness-a1ce6805-184c-4cd8-b81f-26e54ddaca6a is now 5 (1m40.759552086s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:12:11.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2571" for this suite. 
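The invariant this probing test asserts is that every newly observed `restartCount` is strictly larger than the previous one. A standalone check of that invariant over the sequence logged above (0 through 5, one increment per ~20s liveness failure):

```shell
#!/bin/sh
# Verify a sequence of observed restartCounts is monotonically increasing,
# as the container-probe test requires. The sequence is the one logged above.
MONOTONIC=yes
prev=-1
for count in 0 1 2 3 4 5; do
  if [ "$count" -le "$prev" ]; then
    MONOTONIC=no
    echo "restart count regressed: $prev -> $count"
  fi
  prev=$count
done
echo "monotonically increasing restart count: $MONOTONIC"
```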
Feb 24 13:12:17.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:12:17.993: INFO: namespace container-probe-2571 deletion completed in 6.158981223s • [SLOW TEST:115.153 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:12:17.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4376 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4376 to expose endpoints map[] Feb 24 13:12:18.563: INFO: Get endpoints failed (19.050237ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 24 13:12:19.571: INFO: successfully validated that service multi-endpoint-test in namespace services-4376 exposes endpoints map[] (1.027816402s elapsed) STEP: Creating pod pod1 in namespace services-4376 STEP: waiting up to 3m0s for 
service multi-endpoint-test in namespace services-4376 to expose endpoints map[pod1:[100]] Feb 24 13:12:23.807: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.220269565s elapsed, will retry) Feb 24 13:12:28.864: INFO: successfully validated that service multi-endpoint-test in namespace services-4376 exposes endpoints map[pod1:[100]] (9.277999421s elapsed) STEP: Creating pod pod2 in namespace services-4376 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4376 to expose endpoints map[pod1:[100] pod2:[101]] Feb 24 13:12:33.321: INFO: Unexpected endpoints: found map[f029ea13-a105-46c3-a9e7-238f0db424e3:[100]], expected map[pod1:[100] pod2:[101]] (4.446524079s elapsed, will retry) Feb 24 13:12:38.506: INFO: successfully validated that service multi-endpoint-test in namespace services-4376 exposes endpoints map[pod1:[100] pod2:[101]] (9.631423181s elapsed) STEP: Deleting pod pod1 in namespace services-4376 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4376 to expose endpoints map[pod2:[101]] Feb 24 13:12:39.579: INFO: successfully validated that service multi-endpoint-test in namespace services-4376 exposes endpoints map[pod2:[101]] (1.054261948s elapsed) STEP: Deleting pod pod2 in namespace services-4376 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4376 to expose endpoints map[] Feb 24 13:12:40.631: INFO: successfully validated that service multi-endpoint-test in namespace services-4376 exposes endpoints map[] (1.028435919s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:12:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4376" for this suite. 
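The Services test above repeatedly compares the live endpoints against an expected map (e.g. `map[pod1:[100] pod2:[101]]`) and retries on mismatch, which is why one poll logs "Unexpected endpoints ... will retry" before validation succeeds. A compare-with-retry sketch, assuming a stubbed endpoints source that converges on the third poll (the `name:port` encoding here is a simplification of the real Endpoints object):

```shell
#!/bin/sh
# Sketch of "wait for service to expose endpoints map[...]": poll a stubbed
# endpoints listing until it equals the expected set, then stop.
get_endpoints() {  # stub for reading the Endpoints object
  if [ "$1" -lt 3 ]; then echo "pod1:100"; else echo "pod1:100 pod2:101"; fi
}

expected="pod1:100 pod2:101"
VALIDATED=no
for i in 1 2 3 4 5; do
  found=$(get_endpoints "$i")
  if [ "$found" = "$expected" ]; then
    VALIDATED=yes
    echo "validated that service exposes endpoints [$found] after $i polls"
    break
  fi
  echo "unexpected endpoints: found [$found], expected [$expected], will retry"
done
```

In the real test the "found" side initially keys endpoints by pod UID (the `f029ea13-...` entry in the log) until readiness propagates, so a transient mismatch is expected rather than an error.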
Feb 24 13:13:02.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:13:02.879: INFO: namespace services-4376 deletion completed in 22.118617236s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:44.886 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:13:02.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8916
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 13:13:03.009: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 24 13:13:39.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8916 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 13:13:39.211: INFO: >>> kubeConfig: /root/.kube/config
I0224 13:13:39.358933 8 log.go:172] (0xc00099b970) (0xc0029aa460) Create stream
I0224 13:13:39.359009 8 log.go:172] (0xc00099b970) (0xc0029aa460) Stream added, broadcasting: 1
I0224 13:13:39.370628 8 log.go:172] (0xc00099b970) Reply frame received for 1
I0224 13:13:39.370703 8 log.go:172] (0xc00099b970) (0xc001d866e0) Create stream
I0224 13:13:39.370723 8 log.go:172] (0xc00099b970) (0xc001d866e0) Stream added, broadcasting: 3
I0224 13:13:39.372719 8 log.go:172] (0xc00099b970) Reply frame received for 3
I0224 13:13:39.372752 8 log.go:172] (0xc00099b970) (0xc00095caa0) Create stream
I0224 13:13:39.372764 8 log.go:172] (0xc00099b970) (0xc00095caa0) Stream added, broadcasting: 5
I0224 13:13:39.374516 8 log.go:172] (0xc00099b970) Reply frame received for 5
I0224 13:13:39.607328 8 log.go:172] (0xc00099b970) Data frame received for 3
I0224 13:13:39.607367 8 log.go:172] (0xc001d866e0) (3) Data frame handling
I0224 13:13:39.607388 8 log.go:172] (0xc001d866e0) (3) Data frame sent
I0224 13:13:39.741691 8 log.go:172] (0xc00099b970) Data frame received for 1
I0224 13:13:39.741719 8 log.go:172] (0xc00099b970) (0xc00095caa0) Stream removed, broadcasting: 5
I0224 13:13:39.741741 8 log.go:172] (0xc0029aa460) (1) Data frame handling
I0224 13:13:39.741771 8 log.go:172] (0xc0029aa460) (1) Data frame sent
I0224 13:13:39.741784 8 log.go:172] (0xc00099b970) (0xc001d866e0) Stream removed, broadcasting: 3
I0224 13:13:39.741823 8 log.go:172] (0xc00099b970) (0xc0029aa460) Stream removed, broadcasting: 1
I0224 13:13:39.741837 8 log.go:172] (0xc00099b970) Go away received
I0224 13:13:39.742067 8 log.go:172] (0xc00099b970) (0xc0029aa460) Stream removed, broadcasting: 1
I0224 13:13:39.742078 8 log.go:172] (0xc00099b970) (0xc001d866e0) Stream removed, broadcasting: 3
I0224 13:13:39.742085 8 log.go:172] (0xc00099b970) (0xc00095caa0) Stream removed, broadcasting: 5
Feb 24 13:13:39.742: INFO: Waiting for endpoints: map[]
Feb 24 13:13:39.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8916 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 13:13:39.749: INFO: >>> kubeConfig: /root/.kube/config
I0224 13:13:39.813775 8 log.go:172] (0xc00177c420) (0xc00095d4a0) Create stream
I0224 13:13:39.814274 8 log.go:172] (0xc00177c420) (0xc00095d4a0) Stream added, broadcasting: 1
I0224 13:13:39.831458 8 log.go:172] (0xc00177c420) Reply frame received for 1
I0224 13:13:39.831543 8 log.go:172] (0xc00177c420) (0xc00083e8c0) Create stream
I0224 13:13:39.831551 8 log.go:172] (0xc00177c420) (0xc00083e8c0) Stream added, broadcasting: 3
I0224 13:13:39.839396 8 log.go:172] (0xc00177c420) Reply frame received for 3
I0224 13:13:39.839478 8 log.go:172] (0xc00177c420) (0xc00095d540) Create stream
I0224 13:13:39.839501 8 log.go:172] (0xc00177c420) (0xc00095d540) Stream added, broadcasting: 5
I0224 13:13:39.843467 8 log.go:172] (0xc00177c420) Reply frame received for 5
I0224 13:13:39.997182 8 log.go:172] (0xc00177c420) Data frame received for 3
I0224 13:13:39.997234 8 log.go:172] (0xc00083e8c0) (3) Data frame handling
I0224 13:13:39.997262 8 log.go:172] (0xc00083e8c0) (3) Data frame sent
I0224 13:13:40.233388 8 log.go:172] (0xc00177c420) Data frame received for 1
I0224 13:13:40.233460 8 log.go:172] (0xc00095d4a0) (1) Data frame handling
I0224 13:13:40.233474 8 log.go:172] (0xc00095d4a0) (1) Data frame sent
I0224 13:13:40.233668 8 log.go:172] (0xc00177c420) (0xc00095d4a0) Stream removed, broadcasting: 1
I0224 13:13:40.234594 8 log.go:172] (0xc00177c420) (0xc00083e8c0) Stream removed, broadcasting: 3
I0224 13:13:40.234648 8 log.go:172] (0xc00177c420) (0xc00095d540) Stream removed, broadcasting: 5
I0224 13:13:40.234729 8 log.go:172] (0xc00177c420) (0xc00095d4a0) Stream removed, broadcasting: 1
I0224 13:13:40.234762 8 log.go:172] (0xc00177c420) (0xc00083e8c0) Stream removed, broadcasting: 3
I0224 13:13:40.234794 8 log.go:172] (0xc00177c420) (0xc00095d540) Stream removed, broadcasting: 5
I0224 13:13:40.234890 8 log.go:172] (0xc00177c420) Go away received
Feb 24 13:13:40.234: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:13:40.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8916" for this suite.
Feb 24 13:14:06.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:14:06.420: INFO: namespace pod-network-test-8916 deletion completed in 26.167898569s
• [SLOW TEST:63.540 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:14:06.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-03c81ea6-095f-4ee0-86eb-fabaa5cdf50b
STEP: Creating a pod to test consume configMaps
Feb 24 13:14:06.642: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6" in namespace "configmap-2619" to be "success or failure"
Feb 24 13:14:06.653: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89558ms
Feb 24 13:14:08.668: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025897808s
Feb 24 13:14:10.698: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056266251s
Feb 24 13:14:12.713: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071352558s
Feb 24 13:14:14.729: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087125362s
Feb 24 13:14:16.752: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109499968s
Feb 24 13:14:18.759: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.117005695s
STEP: Saw pod success
Feb 24 13:14:18.759: INFO: Pod "pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6" satisfied condition "success or failure"
Feb 24 13:14:18.762: INFO: Trying to get logs from node iruya-node pod pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6 container configmap-volume-test:
STEP: delete the pod
Feb 24 13:14:18.833: INFO: Waiting for pod pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6 to disappear
Feb 24 13:14:18.868: INFO: Pod pod-configmaps-dfe1f3da-3ca4-42a4-99ca-c32f10bf52d6 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:14:18.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2619" for this suite.
Feb 24 13:14:25.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:14:25.211: INFO: namespace configmap-2619 deletion completed in 6.323296959s
• [SLOW TEST:18.791 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:14:25.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:14:25.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185" in namespace "downward-api-7970" to be "success or failure"
Feb 24 13:14:25.430: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Pending", Reason="", readiness=false. Elapsed: 42.334066ms
Feb 24 13:14:27.438: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050527767s
Feb 24 13:14:29.451: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063349427s
Feb 24 13:14:31.463: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075078987s
Feb 24 13:14:33.473: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085057446s
Feb 24 13:14:35.481: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09338044s
STEP: Saw pod success
Feb 24 13:14:35.481: INFO: Pod "downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185" satisfied condition "success or failure"
Feb 24 13:14:35.485: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185 container client-container:
STEP: delete the pod
Feb 24 13:14:35.531: INFO: Waiting for pod downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185 to disappear
Feb 24 13:14:35.536: INFO: Pod downwardapi-volume-07ea8002-b7ce-41a8-94fe-424dac99c185 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:14:35.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7970" for this suite.
Feb 24 13:14:41.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:14:41.768: INFO: namespace downward-api-7970 deletion completed in 6.174012168s
• [SLOW TEST:16.557 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:14:41.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building
a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 24 13:14:41.900: INFO: Waiting up to 5m0s for pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03" in namespace "var-expansion-3618" to be "success or failure"
Feb 24 13:14:41.910: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03": Phase="Pending", Reason="", readiness=false. Elapsed: 9.459389ms
Feb 24 13:14:43.929: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029169424s
Feb 24 13:14:45.937: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036800034s
Feb 24 13:14:47.948: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047570266s
Feb 24 13:14:50.890: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.989479734s
STEP: Saw pod success
Feb 24 13:14:50.890: INFO: Pod "var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03" satisfied condition "success or failure"
Feb 24 13:14:50.897: INFO: Trying to get logs from node iruya-node pod var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03 container dapi-container:
STEP: delete the pod
Feb 24 13:14:54.208: INFO: Waiting for pod var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03 to disappear
Feb 24 13:14:54.215: INFO: Pod var-expansion-87d999e6-25eb-4a87-b21b-36385c9e6e03 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:14:54.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3618" for this suite.
Feb 24 13:15:00.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:15:00.387: INFO: namespace var-expansion-3618 deletion completed in 6.168221328s
• [SLOW TEST:18.618 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:15:00.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service
account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-500c7270-a9c9-495b-b7de-a29255e6dcbe
STEP: Creating a pod to test consume configMaps
Feb 24 13:15:00.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab" in namespace "configmap-2669" to be "success or failure"
Feb 24 13:15:00.531: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Pending", Reason="", readiness=false. Elapsed: 50.935974ms
Feb 24 13:15:02.550: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069556683s
Feb 24 13:15:04.558: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077564551s
Feb 24 13:15:06.569: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088841931s
Feb 24 13:15:08.639: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158942434s
Feb 24 13:15:10.654: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173784733s
STEP: Saw pod success
Feb 24 13:15:10.654: INFO: Pod "pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab" satisfied condition "success or failure"
Feb 24 13:15:10.661: INFO: Trying to get logs from node iruya-node pod pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab container configmap-volume-test:
STEP: delete the pod
Feb 24 13:15:10.806: INFO: Waiting for pod pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab to disappear
Feb 24 13:15:10.816: INFO: Pod pod-configmaps-907e7cef-dce7-44ba-8511-38b3a1ebaeab no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:15:10.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2669" for this suite.
Feb 24 13:15:16.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:15:17.025: INFO: namespace configmap-2669 deletion completed in 6.203641101s
• [SLOW TEST:16.638 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:15:17.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service
account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 13:15:17.106: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 24 13:15:17.166: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 24 13:15:26.091: INFO: Creating deployment "test-rolling-update-deployment"
Feb 24 13:15:26.103: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 24 13:15:26.136: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set
Feb 24 13:15:28.779: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 24 13:15:28.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 13:15:30.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 13:15:32.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 13:15:34.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 13:15:36.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718146926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 13:15:38.813: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 24 13:15:38.834: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1760,SelfLink:/apis/apps/v1/namespaces/deployment-1760/deployments/test-rolling-update-deployment,UID:e4f5409d-3db9-4545-a567-4e6e9a5ec500,ResourceVersion:25574150,Generation:1,CreationTimestamp:2020-02-24 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-24 13:15:26 +0000 UTC 2020-02-24 13:15:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-24 13:15:38 +0000 UTC 2020-02-24 13:15:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Feb 24 13:15:38.837: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1760,SelfLink:/apis/apps/v1/namespaces/deployment-1760/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:7f6cba66-c977-4a42-9d2a-82239f3d0f12,ResourceVersion:25574140,Generation:1,CreationTimestamp:2020-02-24 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e4f5409d-3db9-4545-a567-4e6e9a5ec500 0xc0025e7e47 0xc0025e7e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 24 13:15:38.837: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 24 13:15:38.837: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1760,SelfLink:/apis/apps/v1/namespaces/deployment-1760/replicasets/test-rolling-update-controller,UID:ea5050ca-6d53-46c2-98ec-1284d1ed34e5,ResourceVersion:25574149,Generation:2,CreationTimestamp:2020-02-24 13:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e4f5409d-3db9-4545-a567-4e6e9a5ec500 0xc0025e7d77 0xc0025e7d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 13:15:38.841: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-d5f57" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-d5f57,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1760,SelfLink:/api/v1/namespaces/deployment-1760/pods/test-rolling-update-deployment-79f6b9d75c-d5f57,UID:3fb81d94-6269-4639-a814-047e78a1be81,ResourceVersion:25574139,Generation:0,CreationTimestamp:2020-02-24 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 7f6cba66-c977-4a42-9d2a-82239f3d0f12 0xc0014fa7c7 0xc0014fa7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-znb8z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-znb8z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-znb8z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014fa840} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014fa860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:15:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:15:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:15:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-24 13:15:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-24 13:15:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3ccabee7eb7b758d6635ebd42a088da6e8375f8adb2ff71b97ed8afbcb1a59bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:15:38.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "deployment-1760" for this suite.
Feb 24 13:15:46.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:15:47.139: INFO: namespace deployment-1760 deletion completed in 8.289470585s
• [SLOW TEST:30.113 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:15:47.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 13:15:47.348: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 24 13:15:47.373: INFO: Number of nodes with available pods: 0
Feb 24 13:15:47.373: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:49.461: INFO: Number of nodes with available pods: 0
Feb 24 13:15:49.461: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:52.223: INFO: Number of nodes with available pods: 0
Feb 24 13:15:52.223: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:52.447: INFO: Number of nodes with available pods: 0
Feb 24 13:15:52.447: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:53.710: INFO: Number of nodes with available pods: 0
Feb 24 13:15:53.710: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:54.435: INFO: Number of nodes with available pods: 0
Feb 24 13:15:54.436: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:57.126: INFO: Number of nodes with available pods: 0
Feb 24 13:15:57.126: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:57.389: INFO: Number of nodes with available pods: 0
Feb 24 13:15:57.389: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:15:58.394: INFO: Number of nodes with available pods: 0
Feb 24 13:15:58.394: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:00.132: INFO: Number of nodes with available pods: 0
Feb 24 13:16:00.132: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:01.620: INFO: Number of nodes with available pods: 0
Feb 24 13:16:01.620: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:03.228: INFO: Number of nodes with available pods: 0
Feb 24 13:16:03.228: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:03.463: INFO: Number of nodes with available pods: 0
Feb 24 13:16:03.463: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:04.408: INFO: Number of nodes with available pods: 2
Feb 24 13:16:04.408: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 24 13:16:04.457: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:04.457: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:05.494: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:05.494: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:06.525: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:06.525: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:07.494: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:07.494: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:08.740: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:08.740: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:09.497: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:09.497: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:10.496: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:10.496: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:10.496: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:11.494: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:11.494: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:11.494: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:12.497: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:12.497: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:12.497: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:13.492: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:13.492: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:13.492: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:14.498: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:14.498: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:14.498: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:15.499: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:15.499: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:15.499: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:16.501: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:16.501: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:16.501: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:17.497: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:17.497: INFO: Wrong image for pod: daemon-set-j278z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:17.497: INFO: Pod daemon-set-j278z is not available
Feb 24 13:16:18.512: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:18.512: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:20.141: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:20.141: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:20.501: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:20.501: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:21.507: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:21.507: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:22.505: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:22.505: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:24.596: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:24.596: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:25.495: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:25.495: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:26.496: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:26.496: INFO: Pod daemon-set-rxglj is not available
Feb 24 13:16:27.492: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:28.496: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:29.502: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:30.497: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:31.492: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:32.505: INFO: Wrong image for pod: daemon-set-7fw9j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 24 13:16:32.505: INFO: Pod daemon-set-7fw9j is not available
Feb 24 13:16:33.493: INFO: Pod daemon-set-5rpcx is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 24 13:16:33.508: INFO: Number of nodes with available pods: 1
Feb 24 13:16:33.508: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:34.528: INFO: Number of nodes with available pods: 1
Feb 24 13:16:34.528: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:35.526: INFO: Number of nodes with available pods: 1
Feb 24 13:16:35.526: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:36.561: INFO: Number of nodes with available pods: 1
Feb 24 13:16:36.562: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:37.533: INFO: Number of nodes with available pods: 1
Feb 24 13:16:37.533: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:38.531: INFO: Number of nodes with available pods: 1
Feb 24 13:16:38.531: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:39.575: INFO: Number of nodes with available pods: 1
Feb 24 13:16:39.575: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:40.528: INFO: Number of nodes with available pods: 1
Feb 24 13:16:40.528: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:41.520: INFO: Number of nodes with available pods: 1
Feb 24 13:16:41.520: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:42.527: INFO: Number of nodes with available pods: 1
Feb 24 13:16:42.527: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:43.520: INFO: Number of nodes with available pods: 1
Feb 24 13:16:43.520: INFO: Node iruya-node is running more than one daemon pod
Feb 24 13:16:44.526: INFO: Number of nodes with available pods: 2
Feb 24 13:16:44.526: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7774, will wait for the garbage collector to delete the pods
Feb 24 13:16:44.652: INFO: Deleting DaemonSet.extensions daemon-set took: 29.648942ms
Feb 24 13:16:44.953: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.894766ms
Feb 24 13:16:57.963: INFO: Number of nodes with available pods: 0
Feb 24 13:16:57.963: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 13:16:57.967: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7774/daemonsets","resourceVersion":"25574357"},"items":null}
Feb 24 13:16:57.972: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7774/pods","resourceVersion":"25574357"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:16:57.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7774" for this suite.
Feb 24 13:17:06.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:17:06.134: INFO: namespace daemonsets-7774 deletion completed in 8.142803131s
• [SLOW TEST:78.995 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:17:06.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8027a631-5693-4693-9d3e-bdef83bbdddb
STEP: Creating configMap with name cm-test-opt-upd-540aafff-7eb4-439a-bb8f-0827d3ed12d1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8027a631-5693-4693-9d3e-bdef83bbdddb
STEP: Updating configmap cm-test-opt-upd-540aafff-7eb4-439a-bb8f-0827d3ed12d1
STEP: Creating configMap with name cm-test-opt-create-a02d07e4-9788-4172-b517-b26e8b921c13
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:18:32.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9136" for this suite.
Feb 24 13:18:54.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:18:54.262: INFO: namespace projected-9136 deletion completed in 22.147894245s
• [SLOW TEST:108.128 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:18:54.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6756/secret-test-914e76c8-5983-486f-aa8b-877094e07c23
STEP: Creating a pod to test consume secrets
Feb 24 13:18:54.389: INFO: Waiting up to 5m0s for pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf" in namespace "secrets-6756" to be "success or failure"
Feb 24 13:18:54.400: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67635ms
Feb 24 13:18:56.409: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019017783s
Feb 24 13:18:58.416: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026100992s
Feb 24 13:19:00.424: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034197903s
Feb 24 13:19:02.431: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041151064s
STEP: Saw pod success
Feb 24 13:19:02.431: INFO: Pod "pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf" satisfied condition "success or failure"
Feb 24 13:19:02.434: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf container env-test: 
STEP: delete the pod
Feb 24 13:19:02.512: INFO: Waiting for pod pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf to disappear
Feb 24 13:19:02.518: INFO: Pod pod-configmaps-6174b912-e50a-4099-9db8-56757ad52abf no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:19:02.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6756" for this suite.
Feb 24 13:19:08.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:19:08.700: INFO: namespace secrets-6756 deletion completed in 6.173710209s
• [SLOW TEST:14.437 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:19:08.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lhg4
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 13:19:08.826: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lhg4" in namespace "subpath-9340" to be "success or failure"
Feb 24 13:19:08.846: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.827443ms
Feb 24 13:19:10.862: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036009656s
Feb 24 13:19:12.879: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052359534s
Feb 24 13:19:14.886: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059715391s
Feb 24 13:19:16.906: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 8.080041739s
Feb 24 13:19:18.916: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 10.090075374s
Feb 24 13:19:20.926: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 12.09944622s
Feb 24 13:19:22.935: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 14.108984146s
Feb 24 13:19:24.942: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 16.11545829s
Feb 24 13:19:26.950: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 18.123584875s
Feb 24 13:19:28.979: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 20.153139429s
Feb 24 13:19:30.987: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 22.160968981s
Feb 24 13:19:32.995: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 24.168928727s
Feb 24 13:19:35.003: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 26.176298968s
Feb 24 13:19:37.012: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Running", Reason="", readiness=true. Elapsed: 28.185264252s
Feb 24 13:19:39.021: INFO: Pod "pod-subpath-test-configmap-lhg4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.195176525s
STEP: Saw pod success
Feb 24 13:19:39.022: INFO: Pod "pod-subpath-test-configmap-lhg4" satisfied condition "success or failure"
Feb 24 13:19:39.025: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lhg4 container test-container-subpath-configmap-lhg4: 
STEP: delete the pod
Feb 24 13:19:39.102: INFO: Waiting for pod pod-subpath-test-configmap-lhg4 to disappear
Feb 24 13:19:39.108: INFO: Pod pod-subpath-test-configmap-lhg4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lhg4
Feb 24 13:19:39.108: INFO: Deleting pod "pod-subpath-test-configmap-lhg4" in namespace "subpath-9340"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:19:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9340" for this suite.
Feb 24 13:19:45.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:19:45.231: INFO: namespace subpath-9340 deletion completed in 6.115166231s
• [SLOW TEST:36.531 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:19:45.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 24 13:19:45.296: INFO: Waiting up to 5m0s for pod "pod-92887710-0418-4b13-a254-71d8e66897cc" in namespace "emptydir-2807" to be "success or failure"
Feb 24 13:19:45.348: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc": Phase="Pending", Reason="", readiness=false. Elapsed: 51.566036ms
Feb 24 13:19:47.357: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060473119s
Feb 24 13:19:49.368: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072041637s
Feb 24 13:19:51.386: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090081179s
Feb 24 13:19:53.402: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10571624s
STEP: Saw pod success
Feb 24 13:19:53.402: INFO: Pod "pod-92887710-0418-4b13-a254-71d8e66897cc" satisfied condition "success or failure"
Feb 24 13:19:53.408: INFO: Trying to get logs from node iruya-node pod pod-92887710-0418-4b13-a254-71d8e66897cc container test-container: 
STEP: delete the pod
Feb 24 13:19:53.475: INFO: Waiting for pod pod-92887710-0418-4b13-a254-71d8e66897cc to disappear
Feb 24 13:19:53.481: INFO: Pod pod-92887710-0418-4b13-a254-71d8e66897cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:19:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2807" for this suite.
Feb 24 13:19:59.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:19:59.681: INFO: namespace emptydir-2807 deletion completed in 6.186066845s
• [SLOW TEST:14.450 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:19:59.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 24 13:19:59.847: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3506,SelfLink:/api/v1/namespaces/watch-3506/configmaps/e2e-watch-test-resource-version,UID:bc88dbd2-9982-44d5-8f31-a70a7a7c9f35,ResourceVersion:25574752,Generation:0,CreationTimestamp:2020-02-24 13:19:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 24 13:19:59.848: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3506,SelfLink:/api/v1/namespaces/watch-3506/configmaps/e2e-watch-test-resource-version,UID:bc88dbd2-9982-44d5-8f31-a70a7a7c9f35,ResourceVersion:25574753,Generation:0,CreationTimestamp:2020-02-24 13:19:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:19:59.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3506" for this suite. Feb 24 13:20:05.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:20:06.017: INFO: namespace watch-3506 deletion completed in 6.165435186s • [SLOW TEST:6.336 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:20:06.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-424e6788-9753-4f3b-b5e5-b41f86455290 STEP: Creating a pod to test consume secrets Feb 24 13:20:06.678: INFO: Waiting up to 5m0s for pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393" in namespace "secrets-1238" to be "success or failure" Feb 24 13:20:06.696: 
INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Pending", Reason="", readiness=false. Elapsed: 17.891867ms Feb 24 13:20:08.705: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02682593s Feb 24 13:20:10.714: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035632069s Feb 24 13:20:13.288: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609675453s Feb 24 13:20:15.299: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621152401s Feb 24 13:20:17.310: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.63248859s STEP: Saw pod success Feb 24 13:20:17.311: INFO: Pod "pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393" satisfied condition "success or failure" Feb 24 13:20:17.315: INFO: Trying to get logs from node iruya-node pod pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393 container secret-volume-test: STEP: delete the pod Feb 24 13:20:17.484: INFO: Waiting for pod pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393 to disappear Feb 24 13:20:17.489: INFO: Pod pod-secrets-c7e0d8b3-3f4b-4508-a959-9924832bb393 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:20:17.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1238" for this suite. 
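A minimal sketch of the kind of pod the secrets test above creates — a secret mounted as a volume with `defaultMode` set, running as a non-root user with an `fsGroup` — using illustrative names (`demo-secret`, `pod-secrets-demo`) rather than the generated ones in this run:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret        # illustrative; the suite generates a random name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000        # non-root, as the test title requires
    fsGroup: 1000
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400    # the file mode the test asserts on
  restartPolicy: Never
```

The pod runs to completion ("success or failure" above), and the test reads its container log to verify the mounted file's mode and group ownership.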
Feb 24 13:20:25.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:20:25.720: INFO: namespace secrets-1238 deletion completed in 8.160989738s • [SLOW TEST:19.703 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:20:25.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 24 13:20:25.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-650' Feb 24 13:20:28.870: INFO: stderr: "" Feb 24 13:20:28.870: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo 
pods to come up. Feb 24 13:20:28.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:29.674: INFO: stderr: "" Feb 24 13:20:29.674: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 24 13:20:34.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:34.768: INFO: stderr: "" Feb 24 13:20:34.768: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " Feb 24 13:20:34.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:34.871: INFO: stderr: "" Feb 24 13:20:34.871: INFO: stdout: "" Feb 24 13:20:34.871: INFO: update-demo-nautilus-d4vj8 is created but not running Feb 24 13:20:39.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:40.081: INFO: stderr: "" Feb 24 13:20:40.081: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " Feb 24 13:20:40.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:40.164: INFO: stderr: "" Feb 24 13:20:40.164: INFO: stdout: "" Feb 24 13:20:40.164: INFO: update-demo-nautilus-d4vj8 is created but not running Feb 24 13:20:45.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:45.322: INFO: stderr: "" Feb 24 13:20:45.322: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " Feb 24 13:20:45.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:45.404: INFO: stderr: "" Feb 24 13:20:45.404: INFO: stdout: "true" Feb 24 13:20:45.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:45.487: INFO: stderr: "" Feb 24 13:20:45.487: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:20:45.487: INFO: validating pod update-demo-nautilus-d4vj8 Feb 24 13:20:45.512: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:20:45.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:20:45.512: INFO: update-demo-nautilus-d4vj8 is verified up and running Feb 24 13:20:45.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4s5r -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:45.596: INFO: stderr: "" Feb 24 13:20:45.596: INFO: stdout: "true" Feb 24 13:20:45.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4s5r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:20:45.721: INFO: stderr: "" Feb 24 13:20:45.721: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:20:45.721: INFO: validating pod update-demo-nautilus-z4s5r Feb 24 13:20:45.730: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:20:45.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:20:45.730: INFO: update-demo-nautilus-z4s5r is verified up and running STEP: scaling down the replication controller Feb 24 13:20:45.732: INFO: scanned /root for discovery docs: Feb 24 13:20:45.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-650' Feb 24 13:20:46.988: INFO: stderr: "" Feb 24 13:20:46.988: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 24 13:20:46.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:47.136: INFO: stderr: "" Feb 24 13:20:47.136: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 24 13:20:52.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:52.275: INFO: stderr: "" Feb 24 13:20:52.275: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 24 13:20:57.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:20:57.473: INFO: stderr: "" Feb 24 13:20:57.473: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-z4s5r " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 24 13:21:02.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:21:02.616: INFO: stderr: "" Feb 24 13:21:02.616: INFO: stdout: "update-demo-nautilus-d4vj8 " Feb 24 13:21:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:02.737: INFO: stderr: "" Feb 24 13:21:02.737: INFO: stdout: "true" Feb 24 13:21:02.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:02.816: INFO: stderr: "" Feb 24 13:21:02.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:21:02.816: INFO: validating pod update-demo-nautilus-d4vj8 Feb 24 13:21:02.828: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:21:02.828: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:21:02.828: INFO: update-demo-nautilus-d4vj8 is verified up and running STEP: scaling up the replication controller Feb 24 13:21:02.829: INFO: scanned /root for discovery docs: Feb 24 13:21:02.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-650' Feb 24 13:21:04.447: INFO: stderr: "" Feb 24 13:21:04.447: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 24 13:21:04.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:21:04.807: INFO: stderr: "" Feb 24 13:21:04.807: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-jg6zg " Feb 24 13:21:04.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:04.979: INFO: stderr: "" Feb 24 13:21:04.979: INFO: stdout: "true" Feb 24 13:21:04.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:05.157: INFO: stderr: "" Feb 24 13:21:05.157: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:21:05.157: INFO: validating pod update-demo-nautilus-d4vj8 Feb 24 13:21:05.166: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:21:05.167: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:21:05.167: INFO: update-demo-nautilus-d4vj8 is verified up and running Feb 24 13:21:05.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg6zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:05.279: INFO: stderr: "" Feb 24 13:21:05.279: INFO: stdout: "" Feb 24 13:21:05.279: INFO: update-demo-nautilus-jg6zg is created but not running Feb 24 13:21:10.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:21:10.463: INFO: stderr: "" Feb 24 13:21:10.463: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-jg6zg " Feb 24 13:21:10.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:10.562: INFO: stderr: "" Feb 24 13:21:10.562: INFO: stdout: "true" Feb 24 13:21:10.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:10.728: INFO: stderr: "" Feb 24 13:21:10.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:21:10.728: INFO: validating pod update-demo-nautilus-d4vj8 Feb 24 13:21:10.747: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:21:10.747: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:21:10.747: INFO: update-demo-nautilus-d4vj8 is verified up and running Feb 24 13:21:10.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg6zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:10.864: INFO: stderr: "" Feb 24 13:21:10.864: INFO: stdout: "" Feb 24 13:21:10.864: INFO: update-demo-nautilus-jg6zg is created but not running Feb 24 13:21:15.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-650' Feb 24 13:21:15.990: INFO: stderr: "" Feb 24 13:21:15.990: INFO: stdout: "update-demo-nautilus-d4vj8 update-demo-nautilus-jg6zg " Feb 24 13:21:15.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:16.119: INFO: stderr: "" Feb 24 13:21:16.119: INFO: stdout: "true" Feb 24 13:21:16.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4vj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:16.232: INFO: stderr: "" Feb 24 13:21:16.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:21:16.232: INFO: validating pod update-demo-nautilus-d4vj8 Feb 24 13:21:16.241: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:21:16.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:21:16.241: INFO: update-demo-nautilus-d4vj8 is verified up and running Feb 24 13:21:16.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg6zg -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:16.372: INFO: stderr: "" Feb 24 13:21:16.372: INFO: stdout: "true" Feb 24 13:21:16.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg6zg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-650' Feb 24 13:21:16.442: INFO: stderr: "" Feb 24 13:21:16.442: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 24 13:21:16.442: INFO: validating pod update-demo-nautilus-jg6zg Feb 24 13:21:16.448: INFO: got data: { "image": "nautilus.jpg" } Feb 24 13:21:16.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 24 13:21:16.448: INFO: update-demo-nautilus-jg6zg is verified up and running STEP: using delete to clean up resources Feb 24 13:21:16.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-650' Feb 24 13:21:16.544: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 24 13:21:16.544: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 24 13:21:16.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-650' Feb 24 13:21:16.638: INFO: stderr: "No resources found.\n" Feb 24 13:21:16.638: INFO: stdout: "" Feb 24 13:21:16.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-650 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 24 13:21:16.722: INFO: stderr: "" Feb 24 13:21:16.722: INFO: stdout: "update-demo-nautilus-d4vj8\nupdate-demo-nautilus-jg6zg\n" Feb 24 13:21:17.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-650' Feb 24 13:21:17.426: INFO: stderr: "No resources found.\n" Feb 24 13:21:17.426: INFO: stdout: "" Feb 24 13:21:17.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-650 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 24 13:21:17.515: INFO: stderr: "" Feb 24 13:21:17.515: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:21:17.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-650" for this suite. 
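The cleanup above uses `--grace-period=0 --force`, which deletes the API objects immediately without waiting for the kubelet to confirm termination — hence the warning in the log, and the brief window where the pods are still listed. A standalone sketch, assuming the same namespace and label (the run pipes the manifest via `-f -`; a file name is used here for illustration):

```shell
# Force-delete everything the manifest created; pods may outlive the API objects briefly
kubectl delete --grace-period=0 --force -f rc.yaml --namespace=kubectl-650

# Verify: list only pods that do NOT yet carry a deletionTimestamp
kubectl get pods -l name=update-demo --namespace=kubectl-650 \
  -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
```

The test loops on that second command until it prints nothing, which is the state reached at 13:21:17.515 above.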
Feb 24 13:21:40.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:21:40.268: INFO: namespace kubectl-650 deletion completed in 22.746742772s • [SLOW TEST:74.547 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:21:40.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 24 13:21:49.811: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:21:51.662: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "replicaset-4122" for this suite. Feb 24 13:22:13.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:22:13.882: INFO: namespace replicaset-4122 deletion completed in 22.212467819s • [SLOW TEST:33.614 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:22:13.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 24 13:22:14.034: INFO: Number of nodes with available pods: 0 Feb 24 13:22:14.034: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:15.786: INFO: Number of nodes with available pods: 0 Feb 24 13:22:15.786: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:16.334: INFO: Number of nodes with available pods: 0 Feb 24 13:22:16.334: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:17.053: INFO: Number of nodes with available pods: 0 Feb 24 13:22:17.053: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:18.143: INFO: Number of nodes with available pods: 0 Feb 24 13:22:18.143: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:19.049: INFO: Number of nodes with available pods: 0 Feb 24 13:22:19.049: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:20.616: INFO: Number of nodes with available pods: 0 Feb 24 13:22:20.616: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:21.052: INFO: Number of nodes with available pods: 0 Feb 24 13:22:21.052: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:22.044: INFO: Number of nodes with available pods: 0 Feb 24 13:22:22.044: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:23.070: INFO: Number of nodes with available pods: 0 Feb 24 13:22:23.070: INFO: Node iruya-node is running more than one daemon pod Feb 24 13:22:24.068: INFO: Number of nodes with available pods: 2 Feb 24 13:22:24.068: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 24 13:22:24.198: INFO: Number of nodes with available pods: 2 Feb 24 13:22:24.198: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2527, will wait for the garbage collector to delete the pods Feb 24 13:22:25.622: INFO: Deleting DaemonSet.extensions daemon-set took: 232.611377ms Feb 24 13:22:26.122: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.356154ms Feb 24 13:22:34.431: INFO: Number of nodes with available pods: 0 Feb 24 13:22:34.431: INFO: Number of running nodes: 0, number of available pods: 0 Feb 24 13:22:34.452: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2527/daemonsets","resourceVersion":"25575162"},"items":null} Feb 24 13:22:34.457: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2527/pods","resourceVersion":"25575162"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:22:34.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2527" for this suite. 
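The "retry creating failed daemon pods" flow above can be reproduced with a simple DaemonSet; a sketch under assumed names and image (the suite uses its own test image, not the one shown here):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: nginx   # illustrative; the e2e suite uses its own serve-hostname image
```

Once one pod is available per node, the test forces a daemon pod's phase to `Failed`; the DaemonSet controller notices, deletes the failed pod, and creates a replacement — the "revived" step asserted in the log.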
Feb 24 13:22:40.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:22:40.611: INFO: namespace daemonsets-2527 deletion completed in 6.129184436s • [SLOW TEST:26.728 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:22:40.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0224 13:23:10.897088 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 24 13:23:10.897: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:23:10.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5611" for this suite.
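The orphaning behaviour this test checks (delete a Deployment with `PropagationPolicy: Orphan` and confirm the garbage collector leaves the ReplicaSet alone) can be sketched from the command line; the deployment name and label below are illustrative, not from this run, and `--cascade=orphan` assumes kubectl 1.20+ (older releases spelled it `--cascade=false`):

```shell
# Orphan deletion removes only the Deployment object itself.
kubectl delete deployment sample-deploy --cascade=orphan
# The ReplicaSet the Deployment created should survive the delete:
kubectl get rs -l app=sample
```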
Feb 24 13:23:17.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:23:18.391: INFO: namespace gc-5611 deletion completed in 7.491597808s

• [SLOW TEST:37.780 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:23:18.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-331a30fe-1410-4b3b-bdce-f1cc99697fbe
STEP: Creating secret with name secret-projected-all-test-volume-b3c59914-04c1-42dd-8c99-396d2fbd2b9f
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 24 13:23:18.818: INFO: Waiting up to 5m0s for pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903" in namespace "projected-9386" to be "success or failure"
Feb 24 13:23:18.924: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Pending", Reason="", readiness=false. Elapsed: 106.420697ms
Feb 24 13:23:21.125: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307487073s
Feb 24 13:23:23.133: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314984887s
Feb 24 13:23:25.147: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328844414s
Feb 24 13:23:27.162: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343678513s
Feb 24 13:23:29.173: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.355118146s
STEP: Saw pod success
Feb 24 13:23:29.173: INFO: Pod "projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903" satisfied condition "success or failure"
Feb 24 13:23:29.179: INFO: Trying to get logs from node iruya-node pod projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903 container projected-all-volume-test:
STEP: delete the pod
Feb 24 13:23:29.244: INFO: Waiting for pod projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903 to disappear
Feb 24 13:23:29.247: INFO: Pod projected-volume-c930361c-0780-44f4-bcf6-01f0908e1903 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:23:29.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9386" for this suite.
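The projected-volume combination this test exercises (a ConfigMap, a Secret, and the downward API mounted through a single volume) looks roughly like the manifest below; all object names are illustrative, not the generated names from this run:

```yaml
# Minimal sketch of a projected volume combining all three source types.
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret          # assumed to exist
      - configMap:
          name: myconfigmap       # assumed to exist
      - downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
```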
Feb 24 13:23:35.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:23:35.503: INFO: namespace projected-9386 deletion completed in 6.251123483s

• [SLOW TEST:17.111 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:23:35.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3444
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3444
STEP: Creating statefulset with conflicting port in namespace statefulset-3444
STEP: Waiting until pod test-pod will start running in namespace statefulset-3444
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3444
Feb 24 13:23:43.736: INFO: Observed stateful pod in namespace: statefulset-3444, name: ss-0, uid: 614a13e4-4de2-42e1-92e9-e42fe037461f, status phase: Pending. Waiting for statefulset controller to delete.
Feb 24 13:23:46.495: INFO: Observed stateful pod in namespace: statefulset-3444, name: ss-0, uid: 614a13e4-4de2-42e1-92e9-e42fe037461f, status phase: Failed. Waiting for statefulset controller to delete.
Feb 24 13:23:46.519: INFO: Observed stateful pod in namespace: statefulset-3444, name: ss-0, uid: 614a13e4-4de2-42e1-92e9-e42fe037461f, status phase: Failed. Waiting for statefulset controller to delete.
Feb 24 13:23:46.557: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3444
STEP: Removing pod with conflicting port in namespace statefulset-3444
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3444 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 24 13:23:56.952: INFO: Deleting all statefulset in ns statefulset-3444
Feb 24 13:23:56.959: INFO: Scaling statefulset ss to 0
Feb 24 13:24:07.015: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 13:24:07.023: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:24:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3444" for this suite.
Feb 24 13:24:13.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:24:13.191: INFO: namespace statefulset-3444 deletion completed in 6.140522922s

• [SLOW TEST:37.688 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:24:13.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 13:24:39.402: INFO: Container started at 2020-02-24 13:24:19 +0000 UTC, pod became ready at 2020-02-24 13:24:39 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:24:39.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4258" for this suite.
Feb 24 13:25:01.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:25:02.029: INFO: namespace container-probe-4258 deletion completed in 22.614206247s

• [SLOW TEST:48.837 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:25:02.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 24 13:25:02.764: INFO: created pod pod-service-account-defaultsa
Feb 24 13:25:02.764: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 24 13:25:02.841: INFO: created pod pod-service-account-mountsa
Feb 24 13:25:02.841: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 24 13:25:02.879: INFO: created pod pod-service-account-nomountsa
Feb 24 13:25:02.879: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 24 13:25:02.939: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 24 13:25:02.940: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 24 13:25:03.049: INFO: created pod pod-service-account-mountsa-mountspec
Feb 24 13:25:03.050: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 24 13:25:03.071: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 24 13:25:03.071: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 24 13:25:03.115: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 24 13:25:03.115: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 24 13:25:03.260: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 24 13:25:03.260: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 24 13:25:03.275: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 24 13:25:03.275: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:25:03.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-234" for this suite.
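The automount opt-out matrix above follows from two fields: `automountServiceAccountToken` on the ServiceAccount and the same field on the pod spec, with the pod spec taking precedence (which is why `nomountsa-mountspec` still mounts a token while `mountsa-nomountspec` does not). A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token
spec:
  serviceAccountName: no-token-sa
  automountServiceAccountToken: false  # pod-level setting overrides the ServiceAccount's
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```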
Feb 24 13:25:28.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:25:29.074: INFO: namespace svcaccounts-234 deletion completed in 25.787382432s

• [SLOW TEST:27.045 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:25:29.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:25:29.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6" in namespace "projected-9775" to be "success or failure"
Feb 24 13:25:29.178: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6": Phase="Pending", Reason="", readiness=false. Elapsed: 51.784962ms
Feb 24 13:25:31.186: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060259344s
Feb 24 13:25:33.191: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065321164s
Feb 24 13:25:35.201: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075446594s
Feb 24 13:25:37.212: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08596149s
STEP: Saw pod success
Feb 24 13:25:37.212: INFO: Pod "downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6" satisfied condition "success or failure"
Feb 24 13:25:37.219: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6 container client-container:
STEP: delete the pod
Feb 24 13:25:37.369: INFO: Waiting for pod downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6 to disappear
Feb 24 13:25:37.379: INFO: Pod downwardapi-volume-deccce67-8e10-4638-9de7-4a08fca338b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:25:37.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9775" for this suite.
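Setting a per-item file mode on a downward API volume, as this test does, is done with the `mode` field on the volume item (or `defaultMode` on the volume). A minimal sketch of the volume stanza, with illustrative names:

```yaml
# Pod volume fragment: expose metadata.labels as a file readable only by the owner.
volumes:
- name: podinfo
  downwardAPI:
    defaultMode: 0644
    items:
    - path: "labels"
      mode: 0400          # overrides defaultMode for this item
      fieldRef:
        fieldPath: metadata.labels
```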
Feb 24 13:25:43.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:25:43.544: INFO: namespace projected-9775 deletion completed in 6.16103431s

• [SLOW TEST:14.469 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:25:43.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:25:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9189" for this suite.
Feb 24 13:26:01.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:26:02.025: INFO: namespace emptydir-wrapper-9189 deletion completed in 6.212030677s

• [SLOW TEST:18.480 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:26:02.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 24 13:26:02.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7515'
Feb 24 13:26:02.444: INFO: stderr: ""
Feb 24 13:26:02.444: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 24 13:26:03.457: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:03.457: INFO: Found 0 / 1
Feb 24 13:26:04.455: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:04.455: INFO: Found 0 / 1
Feb 24 13:26:05.453: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:05.453: INFO: Found 0 / 1
Feb 24 13:26:06.458: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:06.458: INFO: Found 0 / 1
Feb 24 13:26:07.451: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:07.451: INFO: Found 0 / 1
Feb 24 13:26:08.460: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:08.460: INFO: Found 0 / 1
Feb 24 13:26:09.465: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:09.465: INFO: Found 0 / 1
Feb 24 13:26:10.454: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:10.455: INFO: Found 1 / 1
Feb 24 13:26:10.455: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 24 13:26:10.458: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:26:10.458: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Feb 24 13:26:10.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515'
Feb 24 13:26:10.605: INFO: stderr: ""
Feb 24 13:26:10.605: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Feb 13:26:09.026 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Feb 13:26:09.026 # Server started, Redis version 3.2.12\n1:M 24 Feb 13:26:09.026 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Feb 13:26:09.026 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 24 13:26:10.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515 --tail=1'
Feb 24 13:26:10.718: INFO: stderr: ""
Feb 24 13:26:10.718: INFO: stdout: "1:M 24 Feb 13:26:09.026 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 24 13:26:10.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515 --limit-bytes=1'
Feb 24 13:26:10.816: INFO: stderr: ""
Feb 24 13:26:10.816: INFO: stdout: " "
STEP: exposing timestamps
Feb 24 13:26:10.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515 --tail=1 --timestamps'
Feb 24 13:26:10.929: INFO: stderr: ""
Feb 24 13:26:10.929: INFO: stdout: "2020-02-24T13:26:09.036977417Z 1:M 24 Feb 13:26:09.026 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 24 13:26:13.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515 --since=1s'
Feb 24 13:26:13.560: INFO: stderr: ""
Feb 24 13:26:13.560: INFO: stdout: ""
Feb 24 13:26:13.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8wvrh redis-master --namespace=kubectl-7515 --since=24h'
Feb 24 13:26:13.676: INFO: stderr: ""
Feb 24 13:26:13.676: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Feb 13:26:09.026 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Feb 13:26:09.026 # Server started, Redis version 3.2.12\n1:M 24 Feb 13:26:09.026 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Feb 13:26:09.026 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 24 13:26:13.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7515'
Feb 24 13:26:13.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 13:26:13.750: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 24 13:26:13.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7515'
Feb 24 13:26:13.911: INFO: stderr: "No resources found.\n"
Feb 24 13:26:13.911: INFO: stdout: ""
Feb 24 13:26:13.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7515 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 13:26:14.045: INFO: stderr: ""
Feb 24 13:26:14.045: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:26:14.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7515" for this suite.
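The log-filtering flags exercised above are standard `kubectl logs` options and can be run against any live pod; the pod and container names below are the ones from this run and will not exist outside it:

```shell
kubectl logs redis-master-8wvrh -c redis-master -n kubectl-7515               # full log
kubectl logs redis-master-8wvrh -c redis-master -n kubectl-7515 --tail=1      # last line only
kubectl logs redis-master-8wvrh -c redis-master -n kubectl-7515 --limit-bytes=1   # first byte only
kubectl logs redis-master-8wvrh -c redis-master -n kubectl-7515 --tail=1 --timestamps  # prepend RFC3339 timestamps
kubectl logs redis-master-8wvrh -c redis-master -n kubectl-7515 --since=1s    # empty if nothing logged in the last second
```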
Feb 24 13:26:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:26:36.142: INFO: namespace kubectl-7515 deletion completed in 22.088575974s

• [SLOW TEST:34.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:26:36.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 24 13:26:36.201: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
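Registering an extension API server with the aggregator, as this test does for the sample apiserver, amounts to creating an `APIService` object that routes a group/version to an in-cluster Service. A rough sketch; the group, service name, and namespace are illustrative (the upstream sample-apiserver uses a "wardle" group), and `insecureSkipTLSVerify` is for experiments only, with a `caBundle` used in real deployments:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com        # illustrative group
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true      # testing only; supply caBundle in production
  service:
    name: sample-api               # Service fronting the extension apiserver
    namespace: default
```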
Feb 24 13:26:37.000: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 24 13:26:40.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147596, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 13:26:42.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147596, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 13:26:44.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147596, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 13:26:46.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147596, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 13:26:48.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147597, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718147596, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 13:26:55.355: INFO: Waited 5.200375037s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:26:55.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5299" for this suite. 
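The repeated `deployment status: v1.DeploymentStatus{...}` dumps above come from a loop that re-checks the sample-apiserver Deployment roughly every 2s until it reports ready, then logs "Waited ... for the sample-apiserver to be ready to handle requests." Outside the suite, the same wait can be done with `kubectl wait --for=condition=Available deployment/...`. A minimal standalone sketch of the underlying poll-with-deadline pattern (names and the stand-in condition are illustrative, not the framework's actual code):

```shell
#!/bin/sh
# Poll a condition every INTERVAL seconds until it holds or TIMEOUT elapses,
# mirroring the ~2s re-check loop that produced the status dumps above.
poll_until() {
  interval=$1; timeout=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then return 0; fi        # condition met: stop polling
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1                            # timed out, like the framework's 5m0s cap
}

# Stand-in condition (hypothetical): "ready" on the third check.
tries=0
check_ready() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

poll_until 1 10 check_ready && echo "condition met after $tries checks"
```

The framework implements this with `wait.Poll` from `k8s.io/apimachinery`; the sketch only shows the retry/deadline shape, not the Deployment-condition check itself.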
Feb 24 13:27:01.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:27:01.945: INFO: namespace aggregator-5299 deletion completed in 6.224897786s • [SLOW TEST:25.803 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:27:01.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 24 13:27:02.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3" in namespace "downward-api-4742" to be "success or failure" Feb 24 13:27:02.072: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.997178ms Feb 24 13:27:04.088: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022903589s Feb 24 13:27:06.097: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031938198s Feb 24 13:27:08.108: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043398506s Feb 24 13:27:10.117: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051851836s STEP: Saw pod success Feb 24 13:27:10.117: INFO: Pod "downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3" satisfied condition "success or failure" Feb 24 13:27:10.120: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3 container client-container: STEP: delete the pod Feb 24 13:27:10.221: INFO: Waiting for pod downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3 to disappear Feb 24 13:27:10.229: INFO: Pod downwardapi-volume-3c26ec35-b21d-449b-bd92-dbf2ce0125f3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:27:10.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4742" for this suite. 
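The pod the test creates mounts a downwardAPI volume that projects the container's own CPU request into a file, which the client-container prints to stdout; the test then fetches the container logs and checks the value before the pod is deleted. A manifest of roughly that shape (all names and the `250m` request are illustrative; the test generates its own pod name and spec in Go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test uses a generated UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # this value is what gets projected below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```

Because the pod runs a one-shot command under `restartPolicy: Never`, it reaches `Succeeded` once the cat completes, which is the `Phase="Succeeded"` transition the wait loop above is watching for.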
Feb 24 13:27:16.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:27:16.401: INFO: namespace downward-api-4742 deletion completed in 6.164951187s • [SLOW TEST:14.456 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:27:16.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6961 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-6961 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6961 Feb 24 13:27:16.730: INFO: Found 0 stateful 
pods, waiting for 1 Feb 24 13:27:26.737: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 24 13:27:26.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 24 13:27:27.325: INFO: stderr: "I0224 13:27:26.974741 1360 log.go:172] (0xc0003d8420) (0xc0005f66e0) Create stream\nI0224 13:27:26.974934 1360 log.go:172] (0xc0003d8420) (0xc0005f66e0) Stream added, broadcasting: 1\nI0224 13:27:26.982985 1360 log.go:172] (0xc0003d8420) Reply frame received for 1\nI0224 13:27:26.983049 1360 log.go:172] (0xc0003d8420) (0xc0005f4280) Create stream\nI0224 13:27:26.983065 1360 log.go:172] (0xc0003d8420) (0xc0005f4280) Stream added, broadcasting: 3\nI0224 13:27:26.984844 1360 log.go:172] (0xc0003d8420) Reply frame received for 3\nI0224 13:27:26.984871 1360 log.go:172] (0xc0003d8420) (0xc000764000) Create stream\nI0224 13:27:26.984882 1360 log.go:172] (0xc0003d8420) (0xc000764000) Stream added, broadcasting: 5\nI0224 13:27:26.986183 1360 log.go:172] (0xc0003d8420) Reply frame received for 5\nI0224 13:27:27.118459 1360 log.go:172] (0xc0003d8420) Data frame received for 5\nI0224 13:27:27.118497 1360 log.go:172] (0xc000764000) (5) Data frame handling\nI0224 13:27:27.118512 1360 log.go:172] (0xc000764000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:27:27.155328 1360 log.go:172] (0xc0003d8420) Data frame received for 3\nI0224 13:27:27.155351 1360 log.go:172] (0xc0005f4280) (3) Data frame handling\nI0224 13:27:27.155368 1360 log.go:172] (0xc0005f4280) (3) Data frame sent\nI0224 13:27:27.317436 1360 log.go:172] (0xc0003d8420) Data frame received for 1\nI0224 13:27:27.317512 1360 log.go:172] (0xc0003d8420) (0xc000764000) Stream removed, broadcasting: 5\nI0224 13:27:27.317552 1360 log.go:172] 
(0xc0005f66e0) (1) Data frame handling\nI0224 13:27:27.317579 1360 log.go:172] (0xc0005f66e0) (1) Data frame sent\nI0224 13:27:27.317611 1360 log.go:172] (0xc0003d8420) (0xc0005f4280) Stream removed, broadcasting: 3\nI0224 13:27:27.317636 1360 log.go:172] (0xc0003d8420) (0xc0005f66e0) Stream removed, broadcasting: 1\nI0224 13:27:27.317703 1360 log.go:172] (0xc0003d8420) Go away received\nI0224 13:27:27.318313 1360 log.go:172] (0xc0003d8420) (0xc0005f66e0) Stream removed, broadcasting: 1\nI0224 13:27:27.318388 1360 log.go:172] (0xc0003d8420) (0xc0005f4280) Stream removed, broadcasting: 3\nI0224 13:27:27.318397 1360 log.go:172] (0xc0003d8420) (0xc000764000) Stream removed, broadcasting: 5\n" Feb 24 13:27:27.325: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 24 13:27:27.325: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 24 13:27:27.336: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 24 13:27:37.345: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 24 13:27:37.346: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 13:27:37.410: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:27:37.411: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:27:37.411: INFO: ss-1 Pending [] Feb 24 13:27:37.411: INFO: Feb 24 13:27:37.411: INFO: StatefulSet ss has not reached scale 3, at 2 Feb 24 13:27:38.438: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 8.962533866s Feb 24 13:27:39.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.935174097s Feb 24 13:27:40.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.886155945s Feb 24 13:27:41.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.749200105s Feb 24 13:27:43.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.738050753s Feb 24 13:27:48.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.685090937s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6961 Feb 24 13:27:49.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:27:50.385: INFO: stderr: "I0224 13:27:50.022493 1377 log.go:172] (0xc000828370) (0xc000718640) Create stream\nI0224 13:27:50.022745 1377 log.go:172] (0xc000828370) (0xc000718640) Stream added, broadcasting: 1\nI0224 13:27:50.028647 1377 log.go:172] (0xc000828370) Reply frame received for 1\nI0224 13:27:50.028761 1377 log.go:172] (0xc000828370) (0xc0008ac000) Create stream\nI0224 13:27:50.028771 1377 log.go:172] (0xc000828370) (0xc0008ac000) Stream added, broadcasting: 3\nI0224 13:27:50.030041 1377 log.go:172] (0xc000828370) Reply frame received for 3\nI0224 13:27:50.030062 1377 log.go:172] (0xc000828370) (0xc0001f0280) Create stream\nI0224 13:27:50.030072 1377 log.go:172] (0xc000828370) (0xc0001f0280) Stream added, broadcasting: 5\nI0224 13:27:50.031774 1377 log.go:172] (0xc000828370) Reply frame received for 5\nI0224 13:27:50.140738 1377 log.go:172] (0xc000828370) Data frame received for 3\nI0224 13:27:50.140819 1377 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0224 13:27:50.140831 1377 log.go:172] (0xc0008ac000) (3) Data frame sent\nI0224 13:27:50.140958 1377 log.go:172] (0xc000828370) Data frame 
received for 5\nI0224 13:27:50.140990 1377 log.go:172] (0xc0001f0280) (5) Data frame handling\nI0224 13:27:50.141011 1377 log.go:172] (0xc0001f0280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 13:27:50.375438 1377 log.go:172] (0xc000828370) (0xc0008ac000) Stream removed, broadcasting: 3\nI0224 13:27:50.375628 1377 log.go:172] (0xc000828370) Data frame received for 1\nI0224 13:27:50.375649 1377 log.go:172] (0xc000718640) (1) Data frame handling\nI0224 13:27:50.375664 1377 log.go:172] (0xc000718640) (1) Data frame sent\nI0224 13:27:50.375673 1377 log.go:172] (0xc000828370) (0xc0001f0280) Stream removed, broadcasting: 5\nI0224 13:27:50.375719 1377 log.go:172] (0xc000828370) (0xc000718640) Stream removed, broadcasting: 1\nI0224 13:27:50.375734 1377 log.go:172] (0xc000828370) Go away received\nI0224 13:27:50.376461 1377 log.go:172] (0xc000828370) (0xc000718640) Stream removed, broadcasting: 1\nI0224 13:27:50.376492 1377 log.go:172] (0xc000828370) (0xc0008ac000) Stream removed, broadcasting: 3\nI0224 13:27:50.376502 1377 log.go:172] (0xc000828370) (0xc0001f0280) Stream removed, broadcasting: 5\n" Feb 24 13:27:50.385: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 24 13:27:50.385: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 24 13:27:50.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:27:50.977: INFO: stderr: "I0224 13:27:50.557156 1394 log.go:172] (0xc000898370) (0xc0008b25a0) Create stream\nI0224 13:27:50.557387 1394 log.go:172] (0xc000898370) (0xc0008b25a0) Stream added, broadcasting: 1\nI0224 13:27:50.563784 1394 log.go:172] (0xc000898370) Reply frame received for 1\nI0224 13:27:50.563804 1394 log.go:172] (0xc000898370) (0xc0008b2640) Create stream\nI0224 13:27:50.563811 
1394 log.go:172] (0xc000898370) (0xc0008b2640) Stream added, broadcasting: 3\nI0224 13:27:50.565210 1394 log.go:172] (0xc000898370) Reply frame received for 3\nI0224 13:27:50.565228 1394 log.go:172] (0xc000898370) (0xc00073a000) Create stream\nI0224 13:27:50.565237 1394 log.go:172] (0xc000898370) (0xc00073a000) Stream added, broadcasting: 5\nI0224 13:27:50.566431 1394 log.go:172] (0xc000898370) Reply frame received for 5\nI0224 13:27:50.784589 1394 log.go:172] (0xc000898370) Data frame received for 5\nI0224 13:27:50.784637 1394 log.go:172] (0xc00073a000) (5) Data frame handling\nI0224 13:27:50.784663 1394 log.go:172] (0xc00073a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 13:27:50.891820 1394 log.go:172] (0xc000898370) Data frame received for 3\nI0224 13:27:50.891851 1394 log.go:172] (0xc0008b2640) (3) Data frame handling\nI0224 13:27:50.891868 1394 log.go:172] (0xc000898370) Data frame received for 5\nI0224 13:27:50.891908 1394 log.go:172] (0xc00073a000) (5) Data frame handling\nI0224 13:27:50.891920 1394 log.go:172] (0xc00073a000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0224 13:27:50.891934 1394 log.go:172] (0xc0008b2640) (3) Data frame sent\nI0224 13:27:50.972589 1394 log.go:172] (0xc000898370) Data frame received for 1\nI0224 13:27:50.972692 1394 log.go:172] (0xc0008b25a0) (1) Data frame handling\nI0224 13:27:50.972724 1394 log.go:172] (0xc0008b25a0) (1) Data frame sent\nI0224 13:27:50.973193 1394 log.go:172] (0xc000898370) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0224 13:27:50.973715 1394 log.go:172] (0xc000898370) (0xc0008b2640) Stream removed, broadcasting: 3\nI0224 13:27:50.973791 1394 log.go:172] (0xc000898370) (0xc00073a000) Stream removed, broadcasting: 5\nI0224 13:27:50.973813 1394 log.go:172] (0xc000898370) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0224 13:27:50.973824 1394 log.go:172] (0xc000898370) (0xc0008b2640) Stream removed, broadcasting: 
3\nI0224 13:27:50.973832 1394 log.go:172] (0xc000898370) (0xc00073a000) Stream removed, broadcasting: 5\nI0224 13:27:50.973905 1394 log.go:172] (0xc000898370) Go away received\n" Feb 24 13:27:50.978: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 24 13:27:50.978: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 24 13:27:50.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:27:51.424: INFO: stderr: "I0224 13:27:51.190288 1412 log.go:172] (0xc0006ce0b0) (0xc000796140) Create stream\nI0224 13:27:51.190419 1412 log.go:172] (0xc0006ce0b0) (0xc000796140) Stream added, broadcasting: 1\nI0224 13:27:51.198600 1412 log.go:172] (0xc0006ce0b0) Reply frame received for 1\nI0224 13:27:51.198671 1412 log.go:172] (0xc0006ce0b0) (0xc000594280) Create stream\nI0224 13:27:51.198682 1412 log.go:172] (0xc0006ce0b0) (0xc000594280) Stream added, broadcasting: 3\nI0224 13:27:51.200853 1412 log.go:172] (0xc0006ce0b0) Reply frame received for 3\nI0224 13:27:51.200885 1412 log.go:172] (0xc0006ce0b0) (0xc0006f4000) Create stream\nI0224 13:27:51.200894 1412 log.go:172] (0xc0006ce0b0) (0xc0006f4000) Stream added, broadcasting: 5\nI0224 13:27:51.202646 1412 log.go:172] (0xc0006ce0b0) Reply frame received for 5\nI0224 13:27:51.303790 1412 log.go:172] (0xc0006ce0b0) Data frame received for 5\nI0224 13:27:51.303822 1412 log.go:172] (0xc0006f4000) (5) Data frame handling\nI0224 13:27:51.303833 1412 log.go:172] (0xc0006f4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0224 13:27:51.303849 1412 log.go:172] (0xc0006ce0b0) Data frame received for 3\nI0224 13:27:51.303860 1412 log.go:172] (0xc000594280) (3) Data frame handling\nI0224 13:27:51.303868 
1412 log.go:172] (0xc000594280) (3) Data frame sent\nI0224 13:27:51.418848 1412 log.go:172] (0xc0006ce0b0) (0xc000594280) Stream removed, broadcasting: 3\nI0224 13:27:51.418953 1412 log.go:172] (0xc0006ce0b0) Data frame received for 1\nI0224 13:27:51.418980 1412 log.go:172] (0xc000796140) (1) Data frame handling\nI0224 13:27:51.418998 1412 log.go:172] (0xc000796140) (1) Data frame sent\nI0224 13:27:51.419010 1412 log.go:172] (0xc0006ce0b0) (0xc0006f4000) Stream removed, broadcasting: 5\nI0224 13:27:51.419060 1412 log.go:172] (0xc0006ce0b0) (0xc000796140) Stream removed, broadcasting: 1\nI0224 13:27:51.419095 1412 log.go:172] (0xc0006ce0b0) Go away received\nI0224 13:27:51.419598 1412 log.go:172] (0xc0006ce0b0) (0xc000796140) Stream removed, broadcasting: 1\nI0224 13:27:51.419625 1412 log.go:172] (0xc0006ce0b0) (0xc000594280) Stream removed, broadcasting: 3\nI0224 13:27:51.419639 1412 log.go:172] (0xc0006ce0b0) (0xc0006f4000) Stream removed, broadcasting: 5\n" Feb 24 13:27:51.424: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 24 13:27:51.424: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 24 13:27:51.436: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 24 13:27:51.436: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false Feb 24 13:28:01.446: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 24 13:28:01.446: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 24 13:28:01.446: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 24 13:28:01.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Feb 24 13:28:01.980: INFO: stderr: "I0224 13:28:01.680704 1426 log.go:172] (0xc000202420) (0xc0007146e0) Create stream\nI0224 13:28:01.680873 1426 log.go:172] (0xc000202420) (0xc0007146e0) Stream added, broadcasting: 1\nI0224 13:28:01.687042 1426 log.go:172] (0xc000202420) Reply frame received for 1\nI0224 13:28:01.687082 1426 log.go:172] (0xc000202420) (0xc0006b0320) Create stream\nI0224 13:28:01.687094 1426 log.go:172] (0xc000202420) (0xc0006b0320) Stream added, broadcasting: 3\nI0224 13:28:01.689326 1426 log.go:172] (0xc000202420) Reply frame received for 3\nI0224 13:28:01.689356 1426 log.go:172] (0xc000202420) (0xc0006b03c0) Create stream\nI0224 13:28:01.689365 1426 log.go:172] (0xc000202420) (0xc0006b03c0) Stream added, broadcasting: 5\nI0224 13:28:01.693974 1426 log.go:172] (0xc000202420) Reply frame received for 5\nI0224 13:28:01.832693 1426 log.go:172] (0xc000202420) Data frame received for 5\nI0224 13:28:01.832749 1426 log.go:172] (0xc0006b03c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:28:01.832874 1426 log.go:172] (0xc000202420) Data frame received for 3\nI0224 13:28:01.832957 1426 log.go:172] (0xc0006b0320) (3) Data frame handling\nI0224 13:28:01.832988 1426 log.go:172] (0xc0006b03c0) (5) Data frame sent\nI0224 13:28:01.833007 1426 log.go:172] (0xc0006b0320) (3) Data frame sent\nI0224 13:28:01.972149 1426 log.go:172] (0xc000202420) (0xc0006b0320) Stream removed, broadcasting: 3\nI0224 13:28:01.972357 1426 log.go:172] (0xc000202420) Data frame received for 1\nI0224 13:28:01.972511 1426 log.go:172] (0xc000202420) (0xc0006b03c0) Stream removed, broadcasting: 5\nI0224 13:28:01.972563 1426 log.go:172] (0xc0007146e0) (1) Data frame handling\nI0224 13:28:01.972583 1426 log.go:172] (0xc0007146e0) (1) Data frame sent\nI0224 13:28:01.972594 1426 log.go:172] (0xc000202420) (0xc0007146e0) Stream removed, broadcasting: 1\nI0224 13:28:01.972610 1426 log.go:172] (0xc000202420) Go 
away received\nI0224 13:28:01.973397 1426 log.go:172] (0xc000202420) (0xc0007146e0) Stream removed, broadcasting: 1\nI0224 13:28:01.973413 1426 log.go:172] (0xc000202420) (0xc0006b0320) Stream removed, broadcasting: 3\nI0224 13:28:01.973418 1426 log.go:172] (0xc000202420) (0xc0006b03c0) Stream removed, broadcasting: 5\n" Feb 24 13:28:01.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 24 13:28:01.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 24 13:28:01.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 24 13:28:02.345: INFO: stderr: "I0224 13:28:02.121091 1449 log.go:172] (0xc0009842c0) (0xc0009506e0) Create stream\nI0224 13:28:02.121225 1449 log.go:172] (0xc0009842c0) (0xc0009506e0) Stream added, broadcasting: 1\nI0224 13:28:02.123702 1449 log.go:172] (0xc0009842c0) Reply frame received for 1\nI0224 13:28:02.123723 1449 log.go:172] (0xc0009842c0) (0xc000950780) Create stream\nI0224 13:28:02.123727 1449 log.go:172] (0xc0009842c0) (0xc000950780) Stream added, broadcasting: 3\nI0224 13:28:02.124623 1449 log.go:172] (0xc0009842c0) Reply frame received for 3\nI0224 13:28:02.124649 1449 log.go:172] (0xc0009842c0) (0xc00071a280) Create stream\nI0224 13:28:02.124660 1449 log.go:172] (0xc0009842c0) (0xc00071a280) Stream added, broadcasting: 5\nI0224 13:28:02.125532 1449 log.go:172] (0xc0009842c0) Reply frame received for 5\nI0224 13:28:02.208405 1449 log.go:172] (0xc0009842c0) Data frame received for 5\nI0224 13:28:02.208507 1449 log.go:172] (0xc00071a280) (5) Data frame handling\nI0224 13:28:02.208530 1449 log.go:172] (0xc00071a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:28:02.237489 1449 log.go:172] (0xc0009842c0) Data frame received for 3\nI0224 13:28:02.237524 1449 
log.go:172] (0xc000950780) (3) Data frame handling\nI0224 13:28:02.237551 1449 log.go:172] (0xc000950780) (3) Data frame sent\nI0224 13:28:02.336220 1449 log.go:172] (0xc0009842c0) Data frame received for 1\nI0224 13:28:02.336395 1449 log.go:172] (0xc0009842c0) (0xc00071a280) Stream removed, broadcasting: 5\nI0224 13:28:02.336449 1449 log.go:172] (0xc0009506e0) (1) Data frame handling\nI0224 13:28:02.336580 1449 log.go:172] (0xc0009506e0) (1) Data frame sent\nI0224 13:28:02.336701 1449 log.go:172] (0xc0009842c0) (0xc000950780) Stream removed, broadcasting: 3\nI0224 13:28:02.336853 1449 log.go:172] (0xc0009842c0) (0xc0009506e0) Stream removed, broadcasting: 1\nI0224 13:28:02.336870 1449 log.go:172] (0xc0009842c0) Go away received\nI0224 13:28:02.337754 1449 log.go:172] (0xc0009842c0) (0xc0009506e0) Stream removed, broadcasting: 1\nI0224 13:28:02.337831 1449 log.go:172] (0xc0009842c0) (0xc000950780) Stream removed, broadcasting: 3\nI0224 13:28:02.337953 1449 log.go:172] (0xc0009842c0) (0xc00071a280) Stream removed, broadcasting: 5\n" Feb 24 13:28:02.345: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 24 13:28:02.345: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 24 13:28:02.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 24 13:28:03.063: INFO: stderr: "I0224 13:28:02.538681 1469 log.go:172] (0xc000970370) (0xc00090c5a0) Create stream\nI0224 13:28:02.539131 1469 log.go:172] (0xc000970370) (0xc00090c5a0) Stream added, broadcasting: 1\nI0224 13:28:02.571422 1469 log.go:172] (0xc000970370) Reply frame received for 1\nI0224 13:28:02.571635 1469 log.go:172] (0xc000970370) (0xc000902000) Create stream\nI0224 13:28:02.571661 1469 log.go:172] (0xc000970370) (0xc000902000) Stream added, broadcasting: 3\nI0224 
13:28:02.574933 1469 log.go:172] (0xc000970370) Reply frame received for 3\nI0224 13:28:02.574968 1469 log.go:172] (0xc000970370) (0xc000690280) Create stream\nI0224 13:28:02.574981 1469 log.go:172] (0xc000970370) (0xc000690280) Stream added, broadcasting: 5\nI0224 13:28:02.577577 1469 log.go:172] (0xc000970370) Reply frame received for 5\nI0224 13:28:02.909228 1469 log.go:172] (0xc000970370) Data frame received for 5\nI0224 13:28:02.909460 1469 log.go:172] (0xc000690280) (5) Data frame handling\nI0224 13:28:02.909488 1469 log.go:172] (0xc000690280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:28:02.936230 1469 log.go:172] (0xc000970370) Data frame received for 3\nI0224 13:28:02.936326 1469 log.go:172] (0xc000902000) (3) Data frame handling\nI0224 13:28:02.936350 1469 log.go:172] (0xc000902000) (3) Data frame sent\nI0224 13:28:03.055429 1469 log.go:172] (0xc000970370) (0xc000902000) Stream removed, broadcasting: 3\nI0224 13:28:03.055531 1469 log.go:172] (0xc000970370) Data frame received for 1\nI0224 13:28:03.055554 1469 log.go:172] (0xc00090c5a0) (1) Data frame handling\nI0224 13:28:03.055571 1469 log.go:172] (0xc00090c5a0) (1) Data frame sent\nI0224 13:28:03.055599 1469 log.go:172] (0xc000970370) (0xc000690280) Stream removed, broadcasting: 5\nI0224 13:28:03.055746 1469 log.go:172] (0xc000970370) (0xc00090c5a0) Stream removed, broadcasting: 1\nI0224 13:28:03.055850 1469 log.go:172] (0xc000970370) Go away received\nI0224 13:28:03.056852 1469 log.go:172] (0xc000970370) (0xc00090c5a0) Stream removed, broadcasting: 1\nI0224 13:28:03.056877 1469 log.go:172] (0xc000970370) (0xc000902000) Stream removed, broadcasting: 3\nI0224 13:28:03.056888 1469 log.go:172] (0xc000970370) (0xc000690280) Stream removed, broadcasting: 5\n" Feb 24 13:28:03.064: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 24 13:28:03.064: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 24 13:28:03.064: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 13:28:03.071: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 24 13:28:13.096: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 24 13:28:13.097: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 24 13:28:13.097: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 24 13:28:13.115: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:13.115: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:13.115: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:13.115: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:13.115: INFO: Feb 24 13:28:13.115: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:15.448: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:15.448: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:15.448: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:15.448: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:15.448: INFO: Feb 24 13:28:15.448: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:16.470: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:16.470: INFO: ss-0 iruya-node Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:16.470: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:16.470: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:16.470: INFO: Feb 24 13:28:16.470: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:17.480: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:17.480: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:17.480: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:17.480: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:17.481: INFO: Feb 24 13:28:17.481: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:18.499: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:18.499: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:18.499: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:18.500: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:18.500: INFO: Feb 24 13:28:18.500: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:19.508: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:19.508: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:19.508: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:19.508: INFO: ss-2 iruya-node Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:19.508: INFO: Feb 24 13:28:19.508: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:20.523: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:20.523: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:20.523: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:20.523: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:20.524: INFO: Feb 24 13:28:20.524: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 13:28:21.534: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:21.535: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:21.535: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:21.535: INFO: Feb 24 13:28:21.535: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 24 13:28:22.555: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 13:28:22.555: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:16 +0000 UTC }] Feb 24 13:28:22.555: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:28:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:27:37 +0000 UTC }] Feb 24 13:28:22.555: INFO: Feb 24 13:28:22.555: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6961 Feb 24 13:28:23.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:28:23.760: INFO: rc: 1 Feb 24 13:28:23.761: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0033cd290 exit status 1 true [0xc002a0c760 0xc002a0c790 0xc002a0c7a8] [0xc002a0c760 0xc002a0c790 0xc002a0c7a8] [0xc002a0c788 0xc002a0c7a0] [0xba6c50 0xba6c50] 0xc00270c9c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 24 13:28:33.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:28:33.905: INFO: rc: 1 Feb 24 13:28:33.905: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc0026ea810 exit status 1 true [0xc0024b9288 0xc0024b92a0 0xc0024b92b8] [0xc0024b9288 0xc0024b92a0 0xc0024b92b8] [0xc0024b9298 0xc0024b92b0] [0xba6c50 0xba6c50] 0xc002dbdec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:28:43.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:28:44.033: INFO: rc: 1 Feb 24 13:28:44.033: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0033cd350 exit status 1 true [0xc002a0c7b0 0xc002a0c7c8 0xc002a0c7e0] [0xc002a0c7b0 0xc002a0c7c8 0xc002a0c7e0] [0xc002a0c7c0 0xc002a0c7d8] [0xba6c50 0xba6c50] 0xc00270cea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:28:54.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:28:54.133: INFO: rc: 1 Feb 24 13:28:54.133: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002e8e3c0 exit status 1 true [0xc002a16148 0xc002a16180 0xc002a161b0] [0xc002a16148 0xc002a16180 0xc002a161b0] [0xc002a16170 0xc002a161a0] [0xba6c50 0xba6c50] 0xc0023c9200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:04.133: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:04.217: INFO: rc: 1 Feb 24 13:29:04.217: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a080f0 exit status 1 true [0xc000186000 0xc0001863f8 0xc000187028] [0xc000186000 0xc0001863f8 0xc000187028] [0xc000186300 0xc000186e08] [0xba6c50 0xba6c50] 0xc002dbc780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:14.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:14.298: INFO: rc: 1 Feb 24 13:29:14.299: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a08210 exit status 1 true [0xc000187288 0xc000187640 0xc000187a30] [0xc000187288 0xc000187640 0xc000187a30] [0xc000187440 0xc000187948] [0xba6c50 0xba6c50] 0xc002dbcc60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:24.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:24.420: INFO: rc: 1 Feb 24 13:29:24.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc0c0 exit status 1 true [0xc0002725a0 0xc0002728d0 0xc0002729a8] [0xc0002725a0 0xc0002728d0 0xc0002729a8] [0xc000272860 0xc000272990] [0xba6c50 0xba6c50] 0xc002788840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:34.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:34.548: INFO: rc: 1 Feb 24 13:29:34.548: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a08330 exit status 1 true [0xc000187a60 0xc000187af8 0xc000187dc8] [0xc000187a60 0xc000187af8 0xc000187dc8] [0xc000187ab8 0xc000187d10] [0xba6c50 0xba6c50] 0xc002dbd080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:44.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:44.677: INFO: rc: 1 Feb 24 13:29:44.677: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc1b0 exit status 1 true [0xc0002729e0 0xc000272ce0 0xc000272d78] [0xc0002729e0 0xc000272ce0 0xc000272d78] [0xc000272bf8 0xc000272d10] [0xba6c50 0xba6c50] 0xc002788ea0 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:29:54.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:29:54.825: INFO: rc: 1 Feb 24 13:29:54.825: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a08420 exit status 1 true [0xc000187f10 0xc00121c228 0xc00121c518] [0xc000187f10 0xc00121c228 0xc00121c518] [0xc00121c1b0 0xc00121c3f8] [0xba6c50 0xba6c50] 0xc002dbd440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:04.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:04.938: INFO: rc: 1 Feb 24 13:30:04.939: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d60f0 exit status 1 true [0xc0024b8028 0xc0024b8088 0xc0024b80b8] [0xc0024b8028 0xc0024b8088 0xc0024b80b8] [0xc0024b8050 0xc0024b80b0] [0xba6c50 0xba6c50] 0xc002484fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:14.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:15.077: INFO: rc: 1 Feb 24 13:30:15.078: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d61b0 exit status 1 true [0xc0024b80e8 0xc0024b8140 0xc0024b8188] [0xc0024b80e8 0xc0024b8140 0xc0024b8188] [0xc0024b8120 0xc0024b8180] [0xba6c50 0xba6c50] 0xc002485bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:25.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:25.168: INFO: rc: 1 Feb 24 13:30:25.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a760c0 exit status 1 true [0xc002928000 0xc002928040 0xc002928058] [0xc002928000 0xc002928040 0xc002928058] [0xc002928028 0xc002928050] [0xba6c50 0xba6c50] 0xc0023f63c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:35.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:37.127: INFO: rc: 1 Feb 24 13:30:37.127: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a084e0 exit status 1 true [0xc00121c670 0xc00121c828 
0xc00121c948] [0xc00121c670 0xc00121c828 0xc00121c948] [0xc00121c720 0xc00121c8a8] [0xba6c50 0xba6c50] 0xc002dbd980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:47.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:47.279: INFO: rc: 1 Feb 24 13:30:47.279: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a085a0 exit status 1 true [0xc00121c9c0 0xc00121ca28 0xc00121cac0] [0xc00121c9c0 0xc00121ca28 0xc00121cac0] [0xc00121c9e0 0xc00121caa0] [0xba6c50 0xba6c50] 0xc002dbde60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:30:57.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:30:57.391: INFO: rc: 1 Feb 24 13:30:57.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d6090 exit status 1 true [0xc000186198 0xc000186640 0xc000187288] [0xc000186198 0xc000186640 0xc000187288] [0xc0001863f8 0xc000187028] [0xba6c50 0xba6c50] 0xc002484fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:31:07.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:07.504: INFO: rc: 1 Feb 24 13:31:07.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc120 exit status 1 true [0xc0024b8028 0xc0024b8088 0xc0024b80b8] [0xc0024b8028 0xc0024b8088 0xc0024b80b8] [0xc0024b8050 0xc0024b80b0] [0xba6c50 0xba6c50] 0xc002788840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:31:17.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:17.649: INFO: rc: 1 Feb 24 13:31:17.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc240 exit status 1 true [0xc0024b80e8 0xc0024b8140 0xc0024b8188] [0xc0024b80e8 0xc0024b8140 0xc0024b8188] [0xc0024b8120 0xc0024b8180] [0xba6c50 0xba6c50] 0xc002788ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:31:27.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:27.789: INFO: rc: 1 Feb 24 13:31:27.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a08090 exit status 1 true [0xc0002725a0 0xc0002728d0 0xc0002729a8] [0xc0002725a0 0xc0002728d0 0xc0002729a8] [0xc000272860 0xc000272990] [0xba6c50 0xba6c50] 0xc002dbc780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:31:37.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:37.938: INFO: rc: 1 Feb 24 13:31:37.938: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a08180 exit status 1 true [0xc0002729e0 0xc000272ce0 0xc000272d78] [0xc0002729e0 0xc000272ce0 0xc000272d78] [0xc000272bf8 0xc000272d10] [0xba6c50 0xba6c50] 0xc002dbcc60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:31:47.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:48.060: INFO: rc: 1 Feb 24 13:31:48.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc360 exit status 1 true [0xc0024b8190 0xc0024b81e0 0xc0024b8220] [0xc0024b8190 0xc0024b81e0 0xc0024b8220] [0xc0024b81a8 0xc0024b8208] [0xba6c50 0xba6c50] 0xc002789740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 
13:31:58.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:31:58.174: INFO: rc: 1 Feb 24 13:31:58.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc450 exit status 1 true [0xc0024b8230 0xc0024b8260 0xc0024b8298] [0xc0024b8230 0xc0024b8260 0xc0024b8298] [0xc0024b8258 0xc0024b8278] [0xba6c50 0xba6c50] 0xc002789ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:32:08.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:32:08.317: INFO: rc: 1 Feb 24 13:32:08.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d6180 exit status 1 true [0xc000187398 0xc000187750 0xc000187a60] [0xc000187398 0xc000187750 0xc000187a60] [0xc000187640 0xc000187a30] [0xba6c50 0xba6c50] 0xc002485bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:32:18.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:32:18.424: INFO: rc: 1 Feb 24 13:32:18.424: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc510 exit status 1 true [0xc0024b82b0 0xc0024b82e8 0xc0024b8340] [0xc0024b82b0 0xc0024b82e8 0xc0024b8340] [0xc0024b82d8 0xc0024b8320] [0xba6c50 0xba6c50] 0xc0023f6000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:32:28.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:32:28.554: INFO: rc: 1 Feb 24 13:32:28.554: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d62d0 exit status 1 true [0xc000187a80 0xc000187b80 0xc000187f10] [0xc000187a80 0xc000187b80 0xc000187f10] [0xc000187af8 0xc000187dc8] [0xba6c50 0xba6c50] 0xc001cdff80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 13:32:38.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 24 13:32:38.661: INFO: rc: 1 Feb 24 13:32:38.661: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a082d0 exit status 1 true [0xc000272da8 0xc000272e98 0xc000272f88] [0xc000272da8 0xc000272e98 0xc000272f88] [0xc000272e50 0xc000272f20] [0xba6c50 
0xba6c50] 0xc002dbd080 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 24 13:32:48.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:32:48.754: INFO: rc: 1
Feb 24 13:32:48.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc630 exit status 1 true [0xc0024b8358 0xc0024b8388 0xc0024b83c8] [0xc0024b8358 0xc0024b8388 0xc0024b83c8] [0xc0024b8380 0xc0024b83b0] [0xba6c50 0xba6c50] 0xc0023f6480 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 24 13:32:58.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:32:58.891: INFO: rc: 1
Feb 24 13:32:58.891: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc0c0 exit status 1 true [0xc0024b8030 0xc0024b8098 0xc0024b80e8] [0xc0024b8030 0xc0024b8098 0xc0024b80e8] [0xc0024b8088 0xc0024b80b8] [0xba6c50 0xba6c50] 0xc002788840 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 24 13:33:08.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:33:08.966: INFO: rc: 1
Feb 24 13:33:08.967: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024fc1e0 exit status 1 true [0xc0024b8100 0xc0024b8160 0xc0024b8190] [0xc0024b8100 0xc0024b8160 0xc0024b8190] [0xc0024b8140 0xc0024b8188] [0xba6c50 0xba6c50] 0xc002788ea0 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 24 13:33:18.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:33:19.097: INFO: rc: 1
Feb 24 13:33:19.097: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025d60c0 exit status 1 true [0xc000186000 0xc0001863f8 0xc000187028] [0xc000186000 0xc0001863f8 0xc000187028] [0xc000186300 0xc000186e08] [0xba6c50 0xba6c50] 0xc002484fc0 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 24 13:33:29.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:33:29.198: INFO: rc: 1
Feb 24 13:33:29.198: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Feb 24 13:33:29.198: INFO: Scaling statefulset ss to 0
Feb 24 13:33:29.210: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 24 13:33:29.212: INFO: Deleting all statefulset in ns statefulset-6961
Feb 24 13:33:29.215: INFO: Scaling statefulset ss to 0
Feb 24 13:33:29.223: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 13:33:29.225: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:33:29.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6961" for this suite.
Feb 24 13:33:35.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:33:35.417: INFO: namespace statefulset-6961 deletion completed in 6.168750484s
• [SLOW TEST:379.015 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:33:35.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 13:33:35.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4944'
Feb 24 13:33:35.803: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 13:33:35.803: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 24 13:33:35.810: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 24 13:33:35.825: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 24 13:33:35.849: INFO: scanned /root for discovery docs:
Feb 24 13:33:35.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4944'
Feb 24 13:33:58.119: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 24 13:33:58.119: INFO: stdout: "Created e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f\nScaling up e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 24 13:33:58.119: INFO: stdout: "Created e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f\nScaling up e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 24 13:33:58.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4944'
Feb 24 13:33:58.242: INFO: stderr: ""
Feb 24 13:33:58.242: INFO: stdout: "e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f-x8nc6 "
Feb 24 13:33:58.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f-x8nc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4944'
Feb 24 13:33:58.341: INFO: stderr: ""
Feb 24 13:33:58.341: INFO: stdout: "true"
Feb 24 13:33:58.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f-x8nc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4944'
Feb 24 13:33:58.428: INFO: stderr: ""
Feb 24 13:33:58.428: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 24 13:33:58.429: INFO: e2e-test-nginx-rc-c5b3da5f8c14b8c75bba3c46254d618f-x8nc6 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 24 13:33:58.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4944'
Feb 24 13:33:58.631: INFO: stderr: ""
Feb 24 13:33:58.631: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:33:58.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4944" for this suite.
Feb 24 13:34:20.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:34:20.775: INFO: namespace kubectl-4944 deletion completed in 22.1252835s
• [SLOW TEST:45.357 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:34:20.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 24 13:34:20.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4585'
Feb 24 13:34:21.237: INFO: stderr: ""
Feb 24 13:34:21.237: INFO: stdout: "pod/pause created\n"
Feb 24 13:34:21.237: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 24 13:34:21.237: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4585" to be "running and ready"
Feb 24 13:34:21.286: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 49.376531ms
Feb 24 13:34:23.293: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055523463s
Feb 24 13:34:25.305: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068268785s
Feb 24 13:34:27.314: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077109623s
Feb 24 13:34:29.321: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.084321416s
Feb 24 13:34:29.321: INFO: Pod "pause" satisfied condition "running and ready"
Feb 24 13:34:29.322: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 24 13:34:29.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4585'
Feb 24 13:34:29.464: INFO: stderr: ""
Feb 24 13:34:29.464: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 24 13:34:29.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4585'
Feb 24 13:34:29.606: INFO: stderr: ""
Feb 24 13:34:29.606: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 24 13:34:29.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4585'
Feb 24 13:34:29.717: INFO: stderr: ""
Feb 24 13:34:29.717: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 24 13:34:29.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4585'
Feb 24 13:34:29.809: INFO: stderr: ""
Feb 24 13:34:29.809: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 24 13:34:29.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4585'
Feb 24 13:34:29.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 13:34:29.933: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 24 13:34:29.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4585'
Feb 24 13:34:30.044: INFO: stderr: "No resources found.\n"
Feb 24 13:34:30.044: INFO: stdout: ""
Feb 24 13:34:30.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4585 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 13:34:30.129: INFO: stderr: ""
Feb 24 13:34:30.129: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:34:30.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4585" for this suite.
Feb 24 13:34:36.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:34:36.312: INFO: namespace kubectl-4585 deletion completed in 6.176158992s
• [SLOW TEST:15.538 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:34:36.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:34:36.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836" in namespace "downward-api-6053" to be "success or failure"
Feb 24 13:34:36.561: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836": Phase="Pending", Reason="", readiness=false. Elapsed: 25.762185ms
Feb 24 13:34:38.578: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043009358s
Feb 24 13:34:40.592: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056807414s
Feb 24 13:34:42.621: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085757843s
Feb 24 13:34:44.638: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102409825s
STEP: Saw pod success
Feb 24 13:34:44.638: INFO: Pod "downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836" satisfied condition "success or failure"
Feb 24 13:34:44.645: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836 container client-container:
STEP: delete the pod
Feb 24 13:34:44.729: INFO: Waiting for pod downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836 to disappear
Feb 24 13:34:44.738: INFO: Pod downwardapi-volume-33e8dc98-6b81-419f-8835-f080e203b836 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:34:44.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6053" for this suite.
Feb 24 13:34:50.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:34:50.935: INFO: namespace downward-api-6053 deletion completed in 6.191295936s
• [SLOW TEST:14.622 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:34:50.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 24 13:34:51.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6486'
Feb 24 13:34:51.437: INFO: stderr: ""
Feb 24 13:34:51.437: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 24 13:34:52.445: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:52.445: INFO: Found 0 / 1
Feb 24 13:34:53.448: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:53.448: INFO: Found 0 / 1
Feb 24 13:34:54.450: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:54.450: INFO: Found 0 / 1
Feb 24 13:34:55.452: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:55.452: INFO: Found 0 / 1
Feb 24 13:34:56.465: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:56.465: INFO: Found 0 / 1
Feb 24 13:34:57.446: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:57.446: INFO: Found 0 / 1
Feb 24 13:34:58.452: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:58.452: INFO: Found 0 / 1
Feb 24 13:34:59.454: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:59.454: INFO: Found 1 / 1
Feb 24 13:34:59.454: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Feb 24 13:34:59.459: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:59.459: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 24 13:34:59.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wnqnd --namespace=kubectl-6486 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 24 13:34:59.589: INFO: stderr: ""
Feb 24 13:34:59.589: INFO: stdout: "pod/redis-master-wnqnd patched\n"
STEP: checking annotations
Feb 24 13:34:59.618: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 13:34:59.618: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:34:59.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6486" for this suite.
Feb 24 13:35:21.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:35:21.770: INFO: namespace kubectl-6486 deletion completed in 22.146724565s
• [SLOW TEST:30.835 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:35:21.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 24 13:35:30.492: INFO: Successfully updated pod "pod-update-da1d026e-6bc4-4dbc-9364-37d5d354f98b"
STEP: verifying the updated pod is in kubernetes
Feb 24 13:35:30.514: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:35:30.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9823" for this suite.
Feb 24 13:35:52.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:35:52.777: INFO: namespace pods-9823 deletion completed in 22.191420346s
• [SLOW TEST:31.006 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:35:52.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2872
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2872 to expose endpoints map[]
Feb 24 13:35:52.905: INFO: successfully validated that service endpoint-test2 in namespace services-2872 exposes endpoints map[] (19.292791ms elapsed)
STEP: Creating pod pod1 in namespace services-2872
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2872 to expose endpoints map[pod1:[80]]
Feb 24 13:35:57.069: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.120466063s elapsed, will retry)
Feb 24 13:36:01.176: INFO: successfully validated that service endpoint-test2 in namespace services-2872 exposes endpoints map[pod1:[80]] (8.227115508s elapsed)
STEP: Creating pod pod2 in namespace services-2872
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2872 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 24 13:36:06.144: INFO: Unexpected endpoints: found map[e1217f39-91d5-4dd3-8f50-b8d49725f186:[80]], expected map[pod1:[80] pod2:[80]] (4.922674848s elapsed, will retry)
Feb 24 13:36:09.209: INFO: successfully validated that service endpoint-test2 in namespace services-2872 exposes endpoints map[pod1:[80] pod2:[80]] (7.98853109s elapsed)
STEP: Deleting pod pod1 in namespace services-2872
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2872 to expose endpoints map[pod2:[80]]
Feb 24 13:36:10.303: INFO: successfully validated that service endpoint-test2 in namespace services-2872 exposes endpoints map[pod2:[80]] (1.088053502s elapsed)
STEP: Deleting pod pod2 in namespace services-2872
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2872 to expose endpoints map[]
Feb 24 13:36:11.360: INFO: successfully validated that service endpoint-test2 in namespace services-2872 exposes endpoints map[] (1.03617702s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:36:12.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2872" for this suite.
Feb 24 13:36:18.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:36:18.816: INFO: namespace services-2872 deletion completed in 6.424014503s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:26.039 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:36:18.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:36:25.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4835" for this suite.
Feb 24 13:36:31.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:36:31.365: INFO: namespace namespaces-4835 deletion completed in 6.127137119s
STEP: Destroying namespace "nsdeletetest-8103" for this suite.
Feb 24 13:36:31.367: INFO: Namespace nsdeletetest-8103 was already deleted
STEP: Destroying namespace "nsdeletetest-8727" for this suite.
Feb 24 13:36:37.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:36:37.508: INFO: namespace nsdeletetest-8727 deletion completed in 6.141243465s
• [SLOW TEST:18.692 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:36:37.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9160/configmap-test-88c77c55-d1dc-40f5-9ee4-2f58991783f1
STEP: Creating a pod to test consume configMaps
Feb 24 13:36:37.599: INFO: Waiting up to 5m0s for pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5" in namespace "configmap-9160" to be "success or failure"
Feb 24 13:36:37.605: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.831517ms
Feb 24 13:36:39.617: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018268632s
Feb 24 13:36:41.629: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029543592s
Feb 24 13:36:43.648: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048410264s
Feb 24 13:36:45.657: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05742531s
Feb 24 13:36:47.663: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063968708s
STEP: Saw pod success
Feb 24 13:36:47.663: INFO: Pod "pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5" satisfied condition "success or failure"
Feb 24 13:36:47.667: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5 container env-test:
STEP: delete the pod
Feb 24 13:36:47.733: INFO: Waiting for pod pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5 to disappear
Feb 24 13:36:47.738: INFO: Pod pod-configmaps-b16ab4d0-bcc2-4b2f-a7ae-9db5208024d5 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:36:47.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9160" for this suite.
Feb 24 13:36:53.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:36:53.945: INFO: namespace configmap-9160 deletion completed in 6.160901443s
• [SLOW TEST:16.437 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:36:53.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 24 13:36:54.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5" in namespace "projected-2428" to be "success or failure" Feb 24 13:36:54.081: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.046362ms Feb 24 13:36:56.089: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020012162s Feb 24 13:36:58.120: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051156196s Feb 24 13:37:00.133: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063726841s Feb 24 13:37:02.142: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072827313s Feb 24 13:37:04.151: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.082003496s STEP: Saw pod success Feb 24 13:37:04.151: INFO: Pod "downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5" satisfied condition "success or failure" Feb 24 13:37:04.155: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5 container client-container: STEP: delete the pod Feb 24 13:37:04.278: INFO: Waiting for pod downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5 to disappear Feb 24 13:37:04.382: INFO: Pod downwardapi-volume-7afa5aac-ca15-493d-8be6-8a3606071ab5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:37:04.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2428" for this suite. Feb 24 13:37:10.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:37:10.539: INFO: namespace projected-2428 deletion completed in 6.149098514s • [SLOW TEST:16.594 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:37:10.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to 
be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 24 13:37:10.680: INFO: Waiting up to 5m0s for pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a" in namespace "downward-api-5114" to be "success or failure" Feb 24 13:37:10.749: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 69.203032ms Feb 24 13:37:12.764: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083860459s Feb 24 13:37:14.773: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092629768s Feb 24 13:37:17.116: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435861865s Feb 24 13:37:19.148: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468082208s Feb 24 13:37:21.163: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.482341241s STEP: Saw pod success Feb 24 13:37:21.163: INFO: Pod "downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a" satisfied condition "success or failure" Feb 24 13:37:21.166: INFO: Trying to get logs from node iruya-node pod downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a container dapi-container: STEP: delete the pod Feb 24 13:37:21.412: INFO: Waiting for pod downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a to disappear Feb 24 13:37:21.426: INFO: Pod downward-api-d2fbd13a-f3be-4db6-979f-817abebfbb8a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:37:21.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5114" for this suite. Feb 24 13:37:27.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:37:27.662: INFO: namespace downward-api-5114 deletion completed in 6.226807626s • [SLOW TEST:17.122 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:37:27.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned 
in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-a32bb57b-461c-41a6-b98b-bc4153950011 STEP: Creating a pod to test consume configMaps Feb 24 13:37:27.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3" in namespace "configmap-4932" to be "success or failure" Feb 24 13:37:27.818: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.494333ms Feb 24 13:37:29.832: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019463886s Feb 24 13:37:32.475: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662962658s Feb 24 13:37:34.491: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679072604s Feb 24 13:37:36.508: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Running", Reason="", readiness=true. Elapsed: 8.695350171s Feb 24 13:37:38.522: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.709779283s STEP: Saw pod success Feb 24 13:37:38.522: INFO: Pod "pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3" satisfied condition "success or failure" Feb 24 13:37:38.530: INFO: Trying to get logs from node iruya-node pod pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3 container configmap-volume-test: STEP: delete the pod Feb 24 13:37:38.642: INFO: Waiting for pod pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3 to disappear Feb 24 13:37:38.648: INFO: Pod pod-configmaps-66e81abc-575a-4f29-b3d1-fc45ce337ac3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:37:38.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4932" for this suite. Feb 24 13:37:44.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:37:44.837: INFO: namespace configmap-4932 deletion completed in 6.183018701s • [SLOW TEST:17.175 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:37:44.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 24 13:37:55.533: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b21577fe-51a4-45dc-8616-9109c1455274" Feb 24 13:37:55.533: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b21577fe-51a4-45dc-8616-9109c1455274" in namespace "pods-7301" to be "terminated due to deadline exceeded" Feb 24 13:37:55.554: INFO: Pod "pod-update-activedeadlineseconds-b21577fe-51a4-45dc-8616-9109c1455274": Phase="Running", Reason="", readiness=true. Elapsed: 21.068977ms Feb 24 13:37:57.565: INFO: Pod "pod-update-activedeadlineseconds-b21577fe-51a4-45dc-8616-9109c1455274": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.031646762s Feb 24 13:37:57.565: INFO: Pod "pod-update-activedeadlineseconds-b21577fe-51a4-45dc-8616-9109c1455274" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:37:57.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7301" for this suite. 
Feb 24 13:38:03.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:38:03.768: INFO: namespace pods-7301 deletion completed in 6.197357286s
• [SLOW TEST:18.931 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:38:03.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b170d361-caf5-413b-a4d2-758171dbab33
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b170d361-caf5-413b-a4d2-758171dbab33
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:39:29.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4350" for this suite.
Feb 24 13:39:51.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:39:52.120: INFO: namespace projected-4350 deletion completed in 22.162030588s
• [SLOW TEST:108.352 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:39:52.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f51a9c75-805e-450b-bc56-3dd3242b37ef
STEP: Creating a pod to test consume configMaps
Feb 24 13:39:52.213: INFO: Waiting up to 5m0s for pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb" in namespace "configmap-3885" to be "success or failure"
Feb 24 13:39:52.255: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb": Phase="Pending", Reason="", readiness=false. Elapsed: 41.947184ms
Feb 24 13:39:54.263: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049942294s
Feb 24 13:39:56.269: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055721834s
Feb 24 13:39:58.312: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099260046s
Feb 24 13:40:00.321: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107668463s
STEP: Saw pod success
Feb 24 13:40:00.321: INFO: Pod "pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb" satisfied condition "success or failure"
Feb 24 13:40:00.326: INFO: Trying to get logs from node iruya-node pod pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb container configmap-volume-test:
STEP: delete the pod
Feb 24 13:40:00.449: INFO: Waiting for pod pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb to disappear
Feb 24 13:40:00.456: INFO: Pod pod-configmaps-99fbbdf2-2c88-4acd-a45a-517c321604bb no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:40:00.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3885" for this suite.
Feb 24 13:40:06.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:40:06.631: INFO: namespace configmap-3885 deletion completed in 6.169850985s
• [SLOW TEST:14.511 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:40:06.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 24 13:40:06.712: INFO: Waiting up to 5m0s for pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1" in namespace "emptydir-7649" to be "success or failure"
Feb 24 13:40:06.716: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907518ms
Feb 24 13:40:08.726: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014690474s
Feb 24 13:40:10.733: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02104394s
Feb 24 13:40:12.742: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030101094s
Feb 24 13:40:14.758: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045992435s
STEP: Saw pod success
Feb 24 13:40:14.758: INFO: Pod "pod-b5befbe4-8371-46ef-8af5-514a1e528bb1" satisfied condition "success or failure"
Feb 24 13:40:14.763: INFO: Trying to get logs from node iruya-node pod pod-b5befbe4-8371-46ef-8af5-514a1e528bb1 container test-container:
STEP: delete the pod
Feb 24 13:40:14.860: INFO: Waiting for pod pod-b5befbe4-8371-46ef-8af5-514a1e528bb1 to disappear
Feb 24 13:40:14.871: INFO: Pod pod-b5befbe4-8371-46ef-8af5-514a1e528bb1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:40:14.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7649" for this suite.
Feb 24 13:40:20.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:40:21.110: INFO: namespace emptydir-7649 deletion completed in 6.198970142s
• [SLOW TEST:14.477 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:40:21.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 13:40:21.329: INFO: Creating deployment "nginx-deployment"
Feb 24 13:40:21.337: INFO: Waiting for observed generation 1
Feb 24 13:40:24.836: INFO: Waiting for all required pods to come up
Feb 24 13:40:25.608: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 24 13:40:57.296: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 24 13:40:57.306: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 24 13:40:57.319: INFO: Updating deployment nginx-deployment
Feb 24 13:40:57.319: INFO: Waiting for observed generation 2
Feb 24 13:40:59.715: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 24 13:41:00.274: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 24 13:41:00.282: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 24 13:41:00.369: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 24 13:41:00.369: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 24 13:41:00.478: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 24 13:41:00.489: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 24 13:41:00.489: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 24 13:41:00.522: INFO: Updating deployment nginx-deployment
Feb 24 13:41:00.522: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 24 13:41:00.535: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 24 13:41:01.382: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 24 13:41:02.175: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-416,SelfLink:/apis/apps/v1/namespaces/deployment-416/deployments/nginx-deployment,UID:78cb77dc-2082-4d04-823a-bb7bf82b5855,ResourceVersion:25577955,Generation:3,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-24 13:40:58 +0000 UTC 2020-02-24 13:40:21 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-24 13:41:01 +0000 UTC 2020-02-24 13:41:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Feb 24 13:41:02.421: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-416,SelfLink:/apis/apps/v1/namespaces/deployment-416/replicasets/nginx-deployment-55fb7cb77f,UID:4d9d9ae0-14c2-4843-9494-c2eebe203f86,ResourceVersion:25577945,Generation:3,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 78cb77dc-2082-4d04-823a-bb7bf82b5855 0xc00230d417 0xc00230d418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 13:41:02.421: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 24 13:41:02.421: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-416,SelfLink:/apis/apps/v1/namespaces/deployment-416/replicasets/nginx-deployment-7b8c6f4498,UID:d0b94556-a669-411c-ac34-e1d28f91f333,ResourceVersion:25577984,Generation:3,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 78cb77dc-2082-4d04-823a-bb7bf82b5855 0xc00230d4e7 0xc00230d4e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 24 13:41:04.394: INFO: Pod "nginx-deployment-55fb7cb77f-26jwc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-26jwc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-26jwc,UID:7fe2b180-cd93-48b0-ae52-91c3733decb9,ResourceVersion:25577993,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18357 0xc001f18358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists
NoExecute 0xc001f183e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.394: INFO: Pod "nginx-deployment-55fb7cb77f-6g882" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6g882,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-6g882,UID:77ee33fc-5ee7-4d30-8c8f-d10cba7ad9f9,ResourceVersion:25577983,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18487 0xc001f18488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18510} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.394: INFO: Pod "nginx-deployment-55fb7cb77f-6j6hv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6j6hv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-6j6hv,UID:3028d141-2a46-4fd3-ae2e-6c9d3fcd6533,ResourceVersion:25577989,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f185b7 0xc001f185b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18630} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.394: INFO: Pod "nginx-deployment-55fb7cb77f-7rq9h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7rq9h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-7rq9h,UID:0b4bb1b5-af52-46ef-8c6c-7906856677f9,ResourceVersion:25577934,Generation:0,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f186e7 0xc001f186e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f18760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-24 13:40:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.394: INFO: Pod "nginx-deployment-55fb7cb77f-f4rwm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f4rwm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-f4rwm,UID:4cd6f1ea-728d-4bf2-a62e-762110013c58,ResourceVersion:25577910,Generation:0,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18857 0xc001f18858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f188c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f188e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-24 13:40:57 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.395: INFO: Pod "nginx-deployment-55fb7cb77f-fvt92" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fvt92,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-fvt92,UID:ff4363ec-5c83-40ba-9493-4c06deeef184,ResourceVersion:25577939,Generation:0,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f189d7 0xc001f189d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-24 13:40:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.395: INFO: Pod "nginx-deployment-55fb7cb77f-hmkrw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hmkrw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-hmkrw,UID:fc0d8c9b-1470-406c-879d-45d8a0a955fc,ResourceVersion:25577924,Generation:0,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18b67 0xc001f18b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f18be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-24 13:40:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.395: INFO: Pod "nginx-deployment-55fb7cb77f-p2vh9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p2vh9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-p2vh9,UID:1b357243-42e0-4b8f-8234-ec7070836417,ResourceVersion:25577977,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18ce7 0xc001f18ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.395: INFO: Pod "nginx-deployment-55fb7cb77f-ql8jp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ql8jp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-ql8jp,UID:c7042fb8-879c-4b0e-ab2f-d51b1101f581,ResourceVersion:25577998,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18e47 0xc001f18e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f18ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.395: INFO: Pod "nginx-deployment-55fb7cb77f-rnlv7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rnlv7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-rnlv7,UID:33deadb9-761e-49d9-b66b-8cb8fb4d3316,ResourceVersion:25577987,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f18f67 
0xc001f18f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f18fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.396: INFO: Pod "nginx-deployment-55fb7cb77f-t9hmx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t9hmx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-t9hmx,UID:0db8d27f-952d-4cb0-8380-0835eee57dc2,ResourceVersion:25577981,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f19087 0xc001f19088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f190f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.396: INFO: Pod "nginx-deployment-55fb7cb77f-wh29c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wh29c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-wh29c,UID:e3d162a9-d309-4286-9f2c-6573434d906b,ResourceVersion:25577956,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f19197 
0xc001f19198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19200} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.396: INFO: Pod "nginx-deployment-55fb7cb77f-wx8nh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wx8nh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-55fb7cb77f-wx8nh,UID:ac2c15bc-338b-486d-b652-b0e98b30ccca,ResourceVersion:25577932,Generation:0,CreationTimestamp:2020-02-24 13:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d9d9ae0-14c2-4843-9494-c2eebe203f86 0xc001f192b7 0xc001f192b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19330} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-24 13:40:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.396: INFO: Pod "nginx-deployment-7b8c6f4498-2qxt4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2qxt4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-2qxt4,UID:5e8fa551-8ee5-4edb-9369-3ff56c83d67d,ResourceVersion:25577991,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f195a7 0xc001f195a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19660} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f196c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.397: INFO: Pod "nginx-deployment-7b8c6f4498-46z58" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-46z58,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-46z58,UID:c07d9ce7-f577-445c-b386-6a16ede849cd,ResourceVersion:25577844,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f19807 
0xc001f19808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19880} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f198b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 
13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-24 13:40:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f168bd49df16c5fc2cd0326cddfcece9f9da625b9bff6e6ec2343379deec9ad7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.397: INFO: Pod "nginx-deployment-7b8c6f4498-5s75t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5s75t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-5s75t,UID:cc2cc348-25fd-4d81-99a2-c4ad9347a89b,ResourceVersion:25577980,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f19997 0xc001f19998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.397: INFO: Pod "nginx-deployment-7b8c6f4498-5sq96" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5sq96,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-5sq96,UID:88abd456-fa16-4888-b7af-354ded58e82c,ResourceVersion:25577837,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f19b37 0xc001f19b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-24 13:40:21 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f955274a602067f65bef8f12e0f7bf367b46d5730e130cfad5764dc67f4d31a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.397: INFO: Pod "nginx-deployment-7b8c6f4498-6nkv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6nkv4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-6nkv4,UID:7b585d06-722c-4823-8cc7-f0e6e9b50886,ResourceVersion:25577961,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f19d57 0xc001f19d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f19e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.397: INFO: Pod "nginx-deployment-7b8c6f4498-99h8k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-99h8k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-99h8k,UID:ae07368f-195d-473e-8505-a4a3f5c29b26,ResourceVersion:25577878,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc001f19ea7 
0xc001f19ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f19fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-24 13:40:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://69f34f07c1e581c6e7c63706c4df21bfbcb178d6fa644a4c8e1648325d5d0b14}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.398: INFO: Pod "nginx-deployment-7b8c6f4498-b28kp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b28kp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-b28kp,UID:af8dff71-5f29-4e8d-bca7-3006d042df7f,ResourceVersion:25577963,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c0d7 0xc000c8c0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c150} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.398: INFO: Pod "nginx-deployment-7b8c6f4498-cgdjk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cgdjk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-cgdjk,UID:381ff631-5926-4519-9089-b984d1094693,ResourceVersion:25577832,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c1f7 0xc000c8c1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c260} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-24 13:40:21 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8904089320cbe6f4068be8a1e487f2e04bcb2827049d7698edd2330ced56e4d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.398: INFO: Pod "nginx-deployment-7b8c6f4498-flx2d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-flx2d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-flx2d,UID:0487eaf1-bae6-463a-9cf9-15ec7cd1c736,ResourceVersion:25577994,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c377 0xc000c8c378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.398: INFO: Pod "nginx-deployment-7b8c6f4498-gzxtk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gzxtk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-gzxtk,UID:c11dac6d-97a4-4a69-990d-231c9533172d,ResourceVersion:25577875,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c497 0xc000c8c498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c510} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-02-24 13:40:21 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-02-24 13:40:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://10b4d6c0d9fdeed093239c859c44230752e1b166508a63d5f725dc23af1ba021}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.399: INFO: Pod "nginx-deployment-7b8c6f4498-h7699" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h7699,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-h7699,UID:39ef6d12-af1e-42d2-b7be-a042fd0e229a,ResourceVersion:25577986,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c607 0xc000c8c608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c670} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.399: INFO: Pod "nginx-deployment-7b8c6f4498-jt8tz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jt8tz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-jt8tz,UID:715cd748-761a-48a1-a8c6-30d8f2801692,ResourceVersion:25577992,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c717 
0xc000c8c718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c790} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.400: INFO: Pod "nginx-deployment-7b8c6f4498-kntdj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kntdj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-kntdj,UID:fc79c3a6-fe76-47da-a69c-167ca620d0e7,ResourceVersion:25577840,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c837 0xc000c8c838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8c8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8c8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-24 13:40:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7b8e610e732760e17420d876f925c17a0e1fba9ac7fed1c55a986efcd3edfcca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.400: INFO: Pod "nginx-deployment-7b8c6f4498-kz5hp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kz5hp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-kz5hp,UID:65931f95-c3fd-4124-91e3-809311adc5e6,ResourceVersion:25577982,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8c997 0xc000c8c998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8ca00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8ca20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.401: INFO: Pod "nginx-deployment-7b8c6f4498-mqbrd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mqbrd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-mqbrd,UID:d0402fc5-e314-4380-b055-78011d6d5060,ResourceVersion:25577948,Generation:0,CreationTimestamp:2020-02-24 13:41:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8caa7 
0xc000c8caa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8cb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8cb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.401: INFO: Pod "nginx-deployment-7b8c6f4498-n52kp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n52kp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-n52kp,UID:88a961e4-e77b-4780-9c6e-6412b5719a59,ResourceVersion:25577978,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8cbd7 0xc000c8cbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8cc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8cc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.401: INFO: Pod "nginx-deployment-7b8c6f4498-rb9nb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rb9nb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-rb9nb,UID:be0688b3-6516-41e8-a7b0-b02fd1690a44,ResourceVersion:25577979,Generation:0,CreationTimestamp:2020-02-24 13:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8ccf7 0xc000c8ccf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8cd60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8cd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.403: INFO: Pod "nginx-deployment-7b8c6f4498-rg7pg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rg7pg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-rg7pg,UID:2928e964-de76-4a16-8915-8a1e5fe38020,ResourceVersion:25577988,Generation:0,CreationTimestamp:2020-02-24 13:41:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8ce07 0xc000c8ce08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8ce70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8ce90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:41:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.404: INFO: Pod "nginx-deployment-7b8c6f4498-vd8fc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vd8fc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-vd8fc,UID:85020340-6a0e-4867-8b52-7e77a1da29a9,ResourceVersion:25577861,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8cf17 
0xc000c8cf18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8cf90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8cfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-24 13:40:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f0f2e400960be5d0f69e94cbcc21d1f27132922457622c988e1ae01eeb116638}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 24 13:41:04.404: INFO: Pod "nginx-deployment-7b8c6f4498-x5nnl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x5nnl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-416,SelfLink:/api/v1/namespaces/deployment-416/pods/nginx-deployment-7b8c6f4498-x5nnl,UID:013310a4-cb64-452c-9077-5eb289d86740,ResourceVersion:25577866,Generation:0,CreationTimestamp:2020-02-24 13:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d0b94556-a669-411c-ac34-e1d28f91f333 0xc000c8d087 0xc000c8d088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5xvv8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5xvv8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5xvv8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c8d100} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c8d120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 13:40:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-24 13:40:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 13:40:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://110537aebb3ad5fed3404b695ae08dac9f82e754c681b99c21fe0c7c02dcb1e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:41:04.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-416" for this suite. 
Feb 24 13:42:42.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:42:42.324: INFO: namespace deployment-416 deletion completed in 1m35.510523631s • [SLOW TEST:141.214 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:42:42.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 24 13:43:06.898: INFO: File jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local 
from pod dns-3690/dns-test-7463bad4-ad72-4dbe-9650-8ac444fa7178 contains '' instead of 'foo.example.com.' Feb 24 13:43:06.898: INFO: Lookups using dns-3690/dns-test-7463bad4-ad72-4dbe-9650-8ac444fa7178 failed for: [jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local] Feb 24 13:43:11.932: INFO: DNS probes using dns-test-7463bad4-ad72-4dbe-9650-8ac444fa7178 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 24 13:43:28.211: INFO: File wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a contains '' instead of 'bar.example.com.' Feb 24 13:43:28.217: INFO: File jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a contains '' instead of 'bar.example.com.' Feb 24 13:43:28.217: INFO: Lookups using dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a failed for: [wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local] Feb 24 13:43:33.232: INFO: File wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 24 13:43:33.241: INFO: File jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a contains '' instead of 'bar.example.com.'
Feb 24 13:43:33.241: INFO: Lookups using dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a failed for: [wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local]
Feb 24 13:43:38.235: INFO: File wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 24 13:43:38.244: INFO: Lookups using dns-3690/dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a failed for: [wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local]
Feb 24 13:43:43.240: INFO: DNS probes using dns-test-7244fef9-67d7-4be5-bbe1-50db6241035a succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3690.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 13:43:57.534: INFO: File wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-2b5eaf2b-f1c1-4927-9efc-539162c2bc45 contains '' instead of '10.98.79.204'
Feb 24 13:43:57.542: INFO: File jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local from pod dns-3690/dns-test-2b5eaf2b-f1c1-4927-9efc-539162c2bc45 contains '' instead of '10.98.79.204'
Feb 24 13:43:57.542: INFO: Lookups using dns-3690/dns-test-2b5eaf2b-f1c1-4927-9efc-539162c2bc45 failed for: [wheezy_udp@dns-test-service-3.dns-3690.svc.cluster.local jessie_udp@dns-test-service-3.dns-3690.svc.cluster.local]
Feb 24 13:44:02.581: INFO: DNS probes using dns-test-2b5eaf2b-f1c1-4927-9efc-539162c2bc45 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:44:02.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3690" for this suite.
Feb 24 13:44:10.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:44:10.960: INFO: namespace dns-3690 deletion completed in 8.149853079s

• [SLOW TEST:88.636 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:44:10.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 24 13:44:11.065: INFO: Waiting up to 5m0s for pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473" in namespace "downward-api-2526" to be "success or failure"
Feb 24 13:44:11.091: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 26.299163ms
Feb 24 13:44:14.405: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 3.340255638s
Feb 24 13:44:16.417: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 5.351984732s
Feb 24 13:44:18.426: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 7.36123884s
Feb 24 13:44:20.435: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 9.37004391s
Feb 24 13:44:22.445: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Pending", Reason="", readiness=false. Elapsed: 11.379953142s
Feb 24 13:44:24.453: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.387932607s
STEP: Saw pod success
Feb 24 13:44:24.453: INFO: Pod "downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473" satisfied condition "success or failure"
Feb 24 13:44:24.457: INFO: Trying to get logs from node iruya-node pod downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473 container dapi-container:
STEP: delete the pod
Feb 24 13:44:24.542: INFO: Waiting for pod downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473 to disappear
Feb 24 13:44:24.550: INFO: Pod downward-api-9cef3d6b-6772-4893-88f8-a7c5d137f473 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:44:24.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2526" for this suite.
Feb 24 13:44:30.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:44:30.719: INFO: namespace downward-api-2526 deletion completed in 6.161371852s

• [SLOW TEST:19.758 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:44:30.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:44:30.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75" in namespace "downward-api-981" to be "success or failure"
Feb 24 13:44:30.965: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 86.952592ms
Feb 24 13:44:33.411: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533065482s
Feb 24 13:44:35.419: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.54128489s
Feb 24 13:44:37.428: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549787836s
Feb 24 13:44:39.436: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558574247s
Feb 24 13:44:41.445: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.566736915s
STEP: Saw pod success
Feb 24 13:44:41.445: INFO: Pod "downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75" satisfied condition "success or failure"
Feb 24 13:44:41.450: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75 container client-container:
STEP: delete the pod
Feb 24 13:44:41.556: INFO: Waiting for pod downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75 to disappear
Feb 24 13:44:41.662: INFO: Pod downwardapi-volume-30df7c08-efef-4cb0-9ce2-a67e2071fb75 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:44:41.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-981" for this suite.
Feb 24 13:44:47.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:44:47.872: INFO: namespace downward-api-981 deletion completed in 6.201670742s

• [SLOW TEST:17.154 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:44:47.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-dad47227-2267-4115-9c07-a6f1c04e50a1 in namespace container-probe-9901
Feb 24 13:44:56.039: INFO: Started pod busybox-dad47227-2267-4115-9c07-a6f1c04e50a1 in namespace container-probe-9901
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 13:44:56.043: INFO: Initial restart count of pod busybox-dad47227-2267-4115-9c07-a6f1c04e50a1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:48:57.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9901" for this suite.
Feb 24 13:49:03.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:49:03.797: INFO: namespace container-probe-9901 deletion completed in 6.300345875s

• [SLOW TEST:255.924 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:49:03.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 24 13:49:14.457: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2090 pod-service-account-f5a1e899-5fd7-48a5-8401-7ac73809316b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 24 13:49:17.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2090 pod-service-account-f5a1e899-5fd7-48a5-8401-7ac73809316b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 24 13:49:18.421: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2090 pod-service-account-f5a1e899-5fd7-48a5-8401-7ac73809316b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:49:18.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2090" for this suite.
Feb 24 13:49:24.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:49:25.108: INFO: namespace svcaccounts-2090 deletion completed in 6.160500604s

• [SLOW TEST:21.311 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:49:25.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8e828aea-e1ed-472b-90b3-d7f4b61ac6e8
STEP: Creating a pod to test consume secrets
Feb 24 13:49:25.229: INFO: Waiting up to 5m0s for pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c" in namespace "secrets-2493" to be "success or failure"
Feb 24 13:49:25.232: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934772ms
Feb 24 13:49:27.239: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010385395s
Feb 24 13:49:29.246: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017318603s
Feb 24 13:49:32.259: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.030112363s
Feb 24 13:49:34.280: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.050785208s
Feb 24 13:49:36.286: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.056778874s
STEP: Saw pod success
Feb 24 13:49:36.286: INFO: Pod "pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c" satisfied condition "success or failure"
Feb 24 13:49:36.290: INFO: Trying to get logs from node iruya-node pod pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c container secret-volume-test:
STEP: delete the pod
Feb 24 13:49:36.427: INFO: Waiting for pod pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c to disappear
Feb 24 13:49:36.434: INFO: Pod pod-secrets-13fa7ee7-46db-4369-8423-dcb0fd1f5d7c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:49:36.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2493" for this suite.
Feb 24 13:49:42.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:49:42.680: INFO: namespace secrets-2493 deletion completed in 6.238640998s

• [SLOW TEST:17.571 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:49:42.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 13:49:42.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72" in namespace "downward-api-4538" to be "success or failure"
Feb 24 13:49:42.802: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Pending", Reason="", readiness=false. Elapsed: 7.64152ms
Feb 24 13:49:44.810: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015688508s
Feb 24 13:49:46.819: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025103033s
Feb 24 13:49:48.827: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03290505s
Feb 24 13:49:50.840: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Running", Reason="", readiness=true. Elapsed: 8.045400659s
Feb 24 13:49:52.861: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066724597s
STEP: Saw pod success
Feb 24 13:49:52.861: INFO: Pod "downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72" satisfied condition "success or failure"
Feb 24 13:49:52.870: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72 container client-container:
STEP: delete the pod
Feb 24 13:49:52.926: INFO: Waiting for pod downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72 to disappear
Feb 24 13:49:52.974: INFO: Pod downwardapi-volume-71f36d06-afc6-4107-b1cd-7368b8d2ab72 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:49:52.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4538" for this suite.
Feb 24 13:49:59.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:49:59.205: INFO: namespace downward-api-4538 deletion completed in 6.224339195s

• [SLOW TEST:16.525 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:49:59.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-db97b770-8c5e-485b-a9ee-6f2686228e3b
STEP: Creating a pod to test consume configMaps
Feb 24 13:49:59.337: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32" in namespace "projected-2958" to be "success or failure"
Feb 24 13:49:59.342: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.918492ms
Feb 24 13:50:01.351: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014168538s
Feb 24 13:50:03.937: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599658496s
Feb 24 13:50:05.949: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.612291014s
Feb 24 13:50:07.956: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618474157s
Feb 24 13:50:09.964: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.627096809s
STEP: Saw pod success
Feb 24 13:50:09.964: INFO: Pod "pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32" satisfied condition "success or failure"
Feb 24 13:50:09.967: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32 container projected-configmap-volume-test:
STEP: delete the pod
Feb 24 13:50:10.037: INFO: Waiting for pod pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32 to disappear
Feb 24 13:50:10.051: INFO: Pod pod-projected-configmaps-e19a69eb-2cfb-43d2-b518-9e7ef201fa32 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:50:10.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2958" for this suite.
Feb 24 13:50:16.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:50:16.198: INFO: namespace projected-2958 deletion completed in 6.137094794s

• [SLOW TEST:16.992 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:50:16.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0224 13:50:31.606423       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 13:50:31.606: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:50:31.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7994" for this suite.
Feb 24 13:50:46.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:50:47.308: INFO: namespace gc-7994 deletion completed in 15.539514897s

• [SLOW TEST:31.110 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:50:47.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 24 13:50:47.703: INFO: Waiting up to 5m0s for pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483" in namespace "emptydir-4952" to be "success or failure"
Feb 24 13:50:47.749: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 45.932514ms
Feb 24 13:50:49.860: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156269739s
Feb 24 13:50:51.872: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168470269s
Feb 24 13:50:53.881: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177823196s
Feb 24 13:50:55.894: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190645888s
Feb 24 13:50:57.902: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 10.199081979s
Feb 24 13:50:59.910: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206453053s
Feb 24 13:51:01.936: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Pending", Reason="", readiness=false. Elapsed: 14.23233225s
Feb 24 13:51:03.949: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.245726176s
STEP: Saw pod success
Feb 24 13:51:03.949: INFO: Pod "pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483" satisfied condition "success or failure"
Feb 24 13:51:03.956: INFO: Trying to get logs from node iruya-node pod pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483 container test-container:
STEP: delete the pod
Feb 24 13:51:04.054: INFO: Waiting for pod pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483 to disappear
Feb 24 13:51:04.061: INFO: Pod pod-73567b5b-7387-4707-a6f5-b4d8cc3ce483 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:51:04.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4952" for this suite.
Feb 24 13:51:10.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:51:10.167: INFO: namespace emptydir-4952 deletion completed in 6.100107399s

• [SLOW TEST:22.858 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:51:10.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-bg98
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 13:51:10.261: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bg98" in namespace "subpath-7384" to be "success or failure"
Feb 24 13:51:10.332: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 71.185955ms
Feb 24 13:51:12.340: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07867473s
Feb 24 13:51:14.354: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093462371s
Feb 24 13:51:16.368: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10713076s
Feb 24 13:51:18.375: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113911718s
Feb 24 13:51:21.293: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.032047528s
Feb 24 13:51:23.304: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 13.042798646s
Feb 24 13:51:25.313: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 15.052081884s
Feb 24 13:51:27.321: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 17.060479356s
Feb 24 13:51:29.331: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 19.069664265s
Feb 24 13:51:31.337: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 21.075735507s
Feb 24 13:51:33.347: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 23.085753136s
Feb 24 13:51:35.354: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 25.093072846s
Feb 24 13:51:37.364: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 27.103381503s
Feb 24 13:51:39.377: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 29.11603434s
Feb 24 13:51:41.393: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Running", Reason="", readiness=true. Elapsed: 31.132479652s
Feb 24 13:51:43.401: INFO: Pod "pod-subpath-test-downwardapi-bg98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.139788857s
STEP: Saw pod success
Feb 24 13:51:43.401: INFO: Pod "pod-subpath-test-downwardapi-bg98" satisfied condition "success or failure"
Feb 24 13:51:43.405: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-bg98 container test-container-subpath-downwardapi-bg98:
STEP: delete the pod
Feb 24 13:51:43.523: INFO: Waiting for pod pod-subpath-test-downwardapi-bg98 to disappear
Feb 24 13:51:43.617: INFO: Pod pod-subpath-test-downwardapi-bg98 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bg98
Feb 24 13:51:43.617: INFO: Deleting pod "pod-subpath-test-downwardapi-bg98" in namespace "subpath-7384"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:51:43.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7384" for this suite.
Feb 24 13:51:49.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:51:49.876: INFO: namespace subpath-7384 deletion completed in 6.244576568s • [SLOW TEST:39.708 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:51:49.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Feb 24 13:52:02.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-00926bcb-0b8e-4167-bb5a-7369bb8531b1 -c busybox-main-container --namespace=emptydir-5798 -- cat /usr/share/volumeshare/shareddata.txt' Feb 24 13:52:02.541: INFO: stderr: "I0224 13:52:02.227440 2367 log.go:172] (0xc0009e4420) (0xc000a18820) Create stream\nI0224 13:52:02.227730 2367 log.go:172] (0xc0009e4420) (0xc000a18820) 
Stream added, broadcasting: 1\nI0224 13:52:02.234535 2367 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0224 13:52:02.234590 2367 log.go:172] (0xc0009e4420) (0xc000666280) Create stream\nI0224 13:52:02.234601 2367 log.go:172] (0xc0009e4420) (0xc000666280) Stream added, broadcasting: 3\nI0224 13:52:02.236188 2367 log.go:172] (0xc0009e4420) Reply frame received for 3\nI0224 13:52:02.236287 2367 log.go:172] (0xc0009e4420) (0xc0005ba000) Create stream\nI0224 13:52:02.236302 2367 log.go:172] (0xc0009e4420) (0xc0005ba000) Stream added, broadcasting: 5\nI0224 13:52:02.237686 2367 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0224 13:52:02.358142 2367 log.go:172] (0xc0009e4420) Data frame received for 3\nI0224 13:52:02.358257 2367 log.go:172] (0xc000666280) (3) Data frame handling\nI0224 13:52:02.358288 2367 log.go:172] (0xc000666280) (3) Data frame sent\nI0224 13:52:02.530498 2367 log.go:172] (0xc0009e4420) (0xc000666280) Stream removed, broadcasting: 3\nI0224 13:52:02.530796 2367 log.go:172] (0xc0009e4420) Data frame received for 1\nI0224 13:52:02.530813 2367 log.go:172] (0xc000a18820) (1) Data frame handling\nI0224 13:52:02.530840 2367 log.go:172] (0xc000a18820) (1) Data frame sent\nI0224 13:52:02.530896 2367 log.go:172] (0xc0009e4420) (0xc000a18820) Stream removed, broadcasting: 1\nI0224 13:52:02.531036 2367 log.go:172] (0xc0009e4420) (0xc0005ba000) Stream removed, broadcasting: 5\nI0224 13:52:02.531161 2367 log.go:172] (0xc0009e4420) Go away received\nI0224 13:52:02.532021 2367 log.go:172] (0xc0009e4420) (0xc000a18820) Stream removed, broadcasting: 1\nI0224 13:52:02.532066 2367 log.go:172] (0xc0009e4420) (0xc000666280) Stream removed, broadcasting: 3\nI0224 13:52:02.532077 2367 log.go:172] (0xc0009e4420) (0xc0005ba000) Stream removed, broadcasting: 5\n" Feb 24 13:52:02.541: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 24 13:52:02.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5798" for this suite. Feb 24 13:52:08.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 24 13:52:08.758: INFO: namespace emptydir-5798 deletion completed in 6.202027343s • [SLOW TEST:18.882 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 24 13:52:08.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 24 13:52:08.894: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 21.159823ms)
Feb 24 13:52:08.901: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.968954ms)
Feb 24 13:52:08.910: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.052152ms)
Feb 24 13:52:08.919: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.911203ms)
Feb 24 13:52:08.925: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.132265ms)
Feb 24 13:52:08.930: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.976439ms)
Feb 24 13:52:08.934: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.9923ms)
Feb 24 13:52:08.941: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.754476ms)
Feb 24 13:52:08.949: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.843191ms)
Feb 24 13:52:09.038: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 88.334185ms)
Feb 24 13:52:09.060: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.469813ms)
Feb 24 13:52:09.084: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.928442ms)
Feb 24 13:52:09.092: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.175791ms)
Feb 24 13:52:09.099: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.943374ms)
Feb 24 13:52:09.107: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.751311ms)
Feb 24 13:52:09.115: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.885203ms)
Feb 24 13:52:09.124: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.441298ms)
Feb 24 13:52:09.132: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.246404ms)
Feb 24 13:52:09.159: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.741909ms)
Feb 24 13:52:09.166: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.152885ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:52:09.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6988" for this suite.
Feb 24 13:52:15.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:52:15.329: INFO: namespace proxy-6988 deletion completed in 6.159179953s

• [SLOW TEST:6.570 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
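The twenty `(200; …ms)` entries above record one status/latency pair per proxied request against the node's `logs/` subresource. When auditing a run like this one, the pairs can be pulled out of the raw log text with a short script. A minimal sketch (the regex assumes only the `(status; latency)` suffix format shown above, with latencies in `ms` or `s`):

```python
import re

# Matches the "(status; latency)" suffix on each proxied request,
# e.g. "alternatives.l... (200; 21.159823ms)" or "(200; 2.07867473s)".
LATENCY_RE = re.compile(r"\((\d{3}); ([\d.]+)(ms|s)\)")

def proxy_latencies_ms(log_text):
    """Return a list of (status, latency_ms) pairs found in log_text."""
    pairs = []
    for status, value, unit in LATENCY_RE.findall(log_text):
        ms = float(value) * (1000.0 if unit == "s" else 1.0)
        pairs.append((int(status), ms))
    return pairs
```

Feeding the twenty samples above through this makes outliers such as the 88.334185ms spike at iteration (9) easy to spot against the ~5-10ms baseline.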
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:52:15.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 24 13:52:15.594: INFO: Waiting up to 5m0s for pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c" in namespace "emptydir-5184" to be "success or failure"
Feb 24 13:52:15.726: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c": Phase="Pending", Reason="", readiness=false. Elapsed: 131.498516ms
Feb 24 13:52:17.735: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140598614s
Feb 24 13:52:19.744: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149366607s
Feb 24 13:52:21.752: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157501739s
Feb 24 13:52:23.760: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.165379155s
STEP: Saw pod success
Feb 24 13:52:23.760: INFO: Pod "pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c" satisfied condition "success or failure"
Feb 24 13:52:23.767: INFO: Trying to get logs from node iruya-node pod pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c container test-container: 
STEP: delete the pod
Feb 24 13:52:24.724: INFO: Waiting for pod pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c to disappear
Feb 24 13:52:24.734: INFO: Pod pod-3a0655b5-8040-4f8e-ba45-3dcb77a4314c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:52:24.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5184" for this suite.
Feb 24 13:52:30.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:52:30.976: INFO: namespace emptydir-5184 deletion completed in 6.226794676s

• [SLOW TEST:15.646 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
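The emptyDir specs above all follow the same pattern: the framework polls the pod's phase roughly every 2s, logging `Phase="Pending" … Elapsed: …` each time, until the phase is terminal ("Succeeded" or "Failed") or the 5m0s budget runs out. A minimal sketch of that wait loop, with `get_phase` as a hypothetical stand-in for the real API call (the actual implementation lives in the Go framework under test/e2e/framework, not here):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or the timeout expires.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop in the log: check the phase, record the elapsed time, sleep ~2s,
    repeat. Returns (phase, elapsed_seconds) once the phase is terminal.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.1f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without waiting real seconds, which is also why the Elapsed values in the log climb in ~2s steps.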
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:52:30.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 24 13:52:41.659: INFO: Successfully updated pod "labelsupdate56378aa5-37af-454d-b593-1a7195e74187"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:52:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2122" for this suite.
Feb 24 13:53:05.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:53:06.012: INFO: namespace projected-2122 deletion completed in 22.170351605s

• [SLOW TEST:35.036 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:53:06.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 24 13:53:06.123: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix606374569/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:53:06.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4713" for this suite.
Feb 24 13:53:12.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:53:12.322: INFO: namespace kubectl-4713 deletion completed in 6.133767486s

• [SLOW TEST:6.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
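The `--unix-socket=/path` spec above verifies that `kubectl proxy` can serve the API over a Unix domain socket instead of a TCP port, and that `/api/` is retrievable through it. The same round trip can be sketched with the Python standard library; the stub server below merely stands in for the real proxy (hypothetical fixed body), and the client side shows the socket-swap trick that `curl --unix-socket` performs internally:

```python
import http.client
import http.server
import os
import socket
import socketserver
import threading

class _StubAPIHandler(http.server.BaseHTTPRequestHandler):
    """Stands in for kubectl proxy; answers every GET with a fixed body."""

    def do_GET(self):
        body = b'{"paths": ["/api", "/api/v1"]}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # client_address is empty for unix sockets; skip the default logger
        pass

def serve_on_unix_socket(sock_path):
    """Start the stub HTTP server listening on a unix domain socket."""
    server = socketserver.ThreadingUnixStreamServer(sock_path, _StubAPIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def get_over_unix_socket(sock_path, url_path="/api/"):
    """Issue a plain HTTP GET over the unix socket, as curl --unix-socket would."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    conn = http.client.HTTPConnection("localhost")
    conn.sock = sock  # bypass TCP: hand the pre-connected unix socket to HTTPConnection
    conn.request("GET", url_path)
    resp = conn.getresponse()
    data = resp.read()
    conn.close()
    return resp.status, data
```

Against a real `kubectl proxy --unix-socket=/tmp/proxy.sock`, the client half alone would return the API discovery document, which is exactly what the "retrieving proxy /api/ output" step checks.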
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:53:12.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:53:20.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6463" for this suite.
Feb 24 13:54:12.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:54:12.768: INFO: namespace kubelet-test-6463 deletion completed in 52.176575904s

• [SLOW TEST:60.446 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:54:12.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-rtvv
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 13:54:12.989: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rtvv" in namespace "subpath-6240" to be "success or failure"
Feb 24 13:54:13.000: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.957291ms
Feb 24 13:54:15.008: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019001626s
Feb 24 13:54:17.016: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026890739s
Feb 24 13:54:19.079: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090379936s
Feb 24 13:54:21.087: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 8.098234387s
Feb 24 13:54:23.096: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 10.107117289s
Feb 24 13:54:25.103: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 12.113886609s
Feb 24 13:54:27.115: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 14.126078633s
Feb 24 13:54:29.182: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 16.193180208s
Feb 24 13:54:31.192: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 18.202899318s
Feb 24 13:54:33.200: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 20.211064317s
Feb 24 13:54:35.206: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 22.217189283s
Feb 24 13:54:37.213: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 24.224546591s
Feb 24 13:54:39.227: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Running", Reason="", readiness=true. Elapsed: 26.238281361s
Feb 24 13:54:41.241: INFO: Pod "pod-subpath-test-configmap-rtvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.251911716s
STEP: Saw pod success
Feb 24 13:54:41.241: INFO: Pod "pod-subpath-test-configmap-rtvv" satisfied condition "success or failure"
Feb 24 13:54:41.245: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-rtvv container test-container-subpath-configmap-rtvv: 
STEP: delete the pod
Feb 24 13:54:41.390: INFO: Waiting for pod pod-subpath-test-configmap-rtvv to disappear
Feb 24 13:54:41.398: INFO: Pod pod-subpath-test-configmap-rtvv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rtvv
Feb 24 13:54:41.398: INFO: Deleting pod "pod-subpath-test-configmap-rtvv" in namespace "subpath-6240"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:54:41.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6240" for this suite.
Feb 24 13:54:47.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:54:47.545: INFO: namespace subpath-6240 deletion completed in 6.1380737s

• [SLOW TEST:34.776 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:54:47.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-65e762b5-a0df-4cc7-bcc4-2bb70036ff4c
STEP: Creating a pod to test consume configMaps
Feb 24 13:54:47.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0" in namespace "configmap-9141" to be "success or failure"
Feb 24 13:54:47.674: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0": Phase="Pending", Reason="", readiness=false. Elapsed: 55.266163ms
Feb 24 13:54:49.681: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062468783s
Feb 24 13:54:51.689: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070562464s
Feb 24 13:54:53.697: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07908188s
Feb 24 13:54:55.704: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085757784s
STEP: Saw pod success
Feb 24 13:54:55.704: INFO: Pod "pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0" satisfied condition "success or failure"
Feb 24 13:54:55.707: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0 container configmap-volume-test: 
STEP: delete the pod
Feb 24 13:54:55.765: INFO: Waiting for pod pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0 to disappear
Feb 24 13:54:55.771: INFO: Pod pod-configmaps-bec8b8ef-39d5-4967-96fa-30cfeb5819f0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:54:55.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9141" for this suite.
Feb 24 13:55:01.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:55:02.122: INFO: namespace configmap-9141 deletion completed in 6.326203894s

• [SLOW TEST:14.577 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:55:02.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-edd9ec28-07c3-4148-822a-ebdea111ba1d
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:55:02.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5604" for this suite.
Feb 24 13:55:08.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:55:08.394: INFO: namespace configmap-5604 deletion completed in 6.185125475s

• [SLOW TEST:6.272 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
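The spec above confirms the API server rejects a ConfigMap whose data key is the empty string. A sketch of the key rule as we read it (our summary of the upstream validation, not copied from the source): a data key must be non-empty, at most 253 characters, and consist only of alphanumerics, `-`, `_`, or `.`:

```python
import re

# Assumed key rule: non-empty, <= 253 chars, alphanumerics plus '-', '_', '.'.
# An empty string fails the '+' quantifier, which is the case this test hits.
_CONFIGMAP_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key):
    """Return True if key would pass the sketched ConfigMap data-key rule."""
    return len(key) <= 253 and bool(_CONFIGMAP_KEY_RE.match(key))
```

The test name `configmap-test-emptyKey-…` in the log is the ConfigMap's own name (which is valid); it is the `""` entry inside `data` that trips validation and makes the create call fail as expected.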
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:55:08.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 24 13:55:16.572: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 24 13:55:26.679: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:55:26.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4576" for this suite.
Feb 24 13:55:32.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:55:32.900: INFO: namespace pods-4576 deletion completed in 6.210094827s

• [SLOW TEST:24.505 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
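The "verifying the kubelet observed the termination notice" step above is a poll: the test repeatedly looks the pod up and treats NotFound as evidence that graceful deletion completed. A cluster-free sketch of that wait loop, with a temp file standing in for the pod object and a background delete standing in for the kubelet finishing termination (all names here are illustrative):

```shell
# Simulated pod object; a background job "terminates" it after ~1s.
pod=$(mktemp)
( sleep 1; rm -f "$pod" ) &

observed=no
for _ in $(seq 1 40); do
  if [ ! -e "$pod" ]; then
    observed=yes
    echo "no pod exists with that name; assuming the termination request completed"
    break
  fi
  sleep 0.25   # poll interval; the e2e framework uses its own backoff
done
wait
```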
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:55:32.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3014
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 13:55:32.981: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 24 13:56:09.340: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3014 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 13:56:09.340: INFO: >>> kubeConfig: /root/.kube/config
I0224 13:56:09.453102       8 log.go:172] (0xc0017122c0) (0xc0011890e0) Create stream
I0224 13:56:09.453142       8 log.go:172] (0xc0017122c0) (0xc0011890e0) Stream added, broadcasting: 1
I0224 13:56:09.463980       8 log.go:172] (0xc0017122c0) Reply frame received for 1
I0224 13:56:09.464030       8 log.go:172] (0xc0017122c0) (0xc001dbd540) Create stream
I0224 13:56:09.464045       8 log.go:172] (0xc0017122c0) (0xc001dbd540) Stream added, broadcasting: 3
I0224 13:56:09.465699       8 log.go:172] (0xc0017122c0) Reply frame received for 3
I0224 13:56:09.465729       8 log.go:172] (0xc0017122c0) (0xc000210000) Create stream
I0224 13:56:09.465738       8 log.go:172] (0xc0017122c0) (0xc000210000) Stream added, broadcasting: 5
I0224 13:56:09.466880       8 log.go:172] (0xc0017122c0) Reply frame received for 5
I0224 13:56:09.685541       8 log.go:172] (0xc0017122c0) Data frame received for 3
I0224 13:56:09.685617       8 log.go:172] (0xc001dbd540) (3) Data frame handling
I0224 13:56:09.685637       8 log.go:172] (0xc001dbd540) (3) Data frame sent
I0224 13:56:09.849810       8 log.go:172] (0xc0017122c0) Data frame received for 1
I0224 13:56:09.849909       8 log.go:172] (0xc0017122c0) (0xc001dbd540) Stream removed, broadcasting: 3
I0224 13:56:09.849963       8 log.go:172] (0xc0011890e0) (1) Data frame handling
I0224 13:56:09.849974       8 log.go:172] (0xc0011890e0) (1) Data frame sent
I0224 13:56:09.849997       8 log.go:172] (0xc0017122c0) (0xc000210000) Stream removed, broadcasting: 5
I0224 13:56:09.850083       8 log.go:172] (0xc0017122c0) (0xc0011890e0) Stream removed, broadcasting: 1
I0224 13:56:09.850098       8 log.go:172] (0xc0017122c0) Go away received
I0224 13:56:09.850645       8 log.go:172] (0xc0017122c0) (0xc0011890e0) Stream removed, broadcasting: 1
I0224 13:56:09.850673       8 log.go:172] (0xc0017122c0) (0xc001dbd540) Stream removed, broadcasting: 3
I0224 13:56:09.850682       8 log.go:172] (0xc0017122c0) (0xc000210000) Stream removed, broadcasting: 5
Feb 24 13:56:09.850: INFO: Found all expected endpoints: [netserver-0]
Feb 24 13:56:09.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3014 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 13:56:09.860: INFO: >>> kubeConfig: /root/.kube/config
I0224 13:56:09.923756       8 log.go:172] (0xc001d20580) (0xc001dbd7c0) Create stream
I0224 13:56:09.923857       8 log.go:172] (0xc001d20580) (0xc001dbd7c0) Stream added, broadcasting: 1
I0224 13:56:09.932011       8 log.go:172] (0xc001d20580) Reply frame received for 1
I0224 13:56:09.932048       8 log.go:172] (0xc001d20580) (0xc001dbd9a0) Create stream
I0224 13:56:09.932061       8 log.go:172] (0xc001d20580) (0xc001dbd9a0) Stream added, broadcasting: 3
I0224 13:56:09.935266       8 log.go:172] (0xc001d20580) Reply frame received for 3
I0224 13:56:09.935361       8 log.go:172] (0xc001d20580) (0xc001189360) Create stream
I0224 13:56:09.935372       8 log.go:172] (0xc001d20580) (0xc001189360) Stream added, broadcasting: 5
I0224 13:56:09.937036       8 log.go:172] (0xc001d20580) Reply frame received for 5
I0224 13:56:10.070070       8 log.go:172] (0xc001d20580) Data frame received for 3
I0224 13:56:10.070240       8 log.go:172] (0xc001dbd9a0) (3) Data frame handling
I0224 13:56:10.070302       8 log.go:172] (0xc001dbd9a0) (3) Data frame sent
I0224 13:56:10.212325       8 log.go:172] (0xc001d20580) (0xc001dbd9a0) Stream removed, broadcasting: 3
I0224 13:56:10.212420       8 log.go:172] (0xc001d20580) Data frame received for 1
I0224 13:56:10.212445       8 log.go:172] (0xc001d20580) (0xc001189360) Stream removed, broadcasting: 5
I0224 13:56:10.212491       8 log.go:172] (0xc001dbd7c0) (1) Data frame handling
I0224 13:56:10.212559       8 log.go:172] (0xc001dbd7c0) (1) Data frame sent
I0224 13:56:10.212571       8 log.go:172] (0xc001d20580) (0xc001dbd7c0) Stream removed, broadcasting: 1
I0224 13:56:10.212586       8 log.go:172] (0xc001d20580) Go away received
I0224 13:56:10.212803       8 log.go:172] (0xc001d20580) (0xc001dbd7c0) Stream removed, broadcasting: 1
I0224 13:56:10.212817       8 log.go:172] (0xc001d20580) (0xc001dbd9a0) Stream removed, broadcasting: 3
I0224 13:56:10.212828       8 log.go:172] (0xc001d20580) (0xc001189360) Stream removed, broadcasting: 5
Feb 24 13:56:10.212: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 13:56:10.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3014" for this suite.
Feb 24 13:56:34.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 13:56:34.368: INFO: namespace pod-network-test-3014 deletion completed in 24.145370526s

• [SLOW TEST:61.467 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
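The connectivity probes above shell into a host-exec pod and pipe curl's response through `grep -v '^\s*$'` so blank lines never count as a reported hostname (`\s` is a GNU grep extension; `[[:space:]]` is the portable spelling). A cluster-free sketch of that filtering step, with `printf` standing in for the curl response:

```shell
# Simulate the probe's output handling: the HTTP body may carry blank or
# whitespace-only lines; grep -v drops them so only the hostname survives.
response='netserver-0
   
'
hostname=$(printf '%s\n' "$response" | grep -v '^[[:space:]]*$')
echo "$hostname"   # -> netserver-0
```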
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 13:56:34.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3615
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 24 13:56:34.478: INFO: Found 0 stateful pods, waiting for 3
Feb 24 13:56:44.491: INFO: Found 2 stateful pods, waiting for 3
Feb 24 13:56:54.495: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 13:56:54.495: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 13:56:54.495: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 24 13:57:04.513: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 13:57:04.513: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 13:57:04.513: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 13:57:04.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3615 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 13:57:04.890: INFO: stderr: "I0224 13:57:04.696444    2420 log.go:172] (0xc00097a9a0) (0xc000a0e8c0) Create stream\nI0224 13:57:04.696528    2420 log.go:172] (0xc00097a9a0) (0xc000a0e8c0) Stream added, broadcasting: 1\nI0224 13:57:04.701007    2420 log.go:172] (0xc00097a9a0) Reply frame received for 1\nI0224 13:57:04.701036    2420 log.go:172] (0xc00097a9a0) (0xc000a0e000) Create stream\nI0224 13:57:04.701047    2420 log.go:172] (0xc00097a9a0) (0xc000a0e000) Stream added, broadcasting: 3\nI0224 13:57:04.702278    2420 log.go:172] (0xc00097a9a0) Reply frame received for 3\nI0224 13:57:04.702295    2420 log.go:172] (0xc00097a9a0) (0xc000588280) Create stream\nI0224 13:57:04.702302    2420 log.go:172] (0xc00097a9a0) (0xc000588280) Stream added, broadcasting: 5\nI0224 13:57:04.703248    2420 log.go:172] (0xc00097a9a0) Reply frame received for 5\nI0224 13:57:04.790472    2420 log.go:172] (0xc00097a9a0) Data frame received for 5\nI0224 13:57:04.790574    2420 log.go:172] (0xc000588280) (5) Data frame handling\nI0224 13:57:04.790595    2420 log.go:172] (0xc000588280) (5) Data frame sent\n+ mvI0224 13:57:04.791205    2420 log.go:172] (0xc00097a9a0) Data frame received for 5\nI0224 13:57:04.791268    2420 log.go:172] (0xc000588280) (5) Data frame handling\nI0224 13:57:04.791299    2420 log.go:172] (0xc000588280) (5) Data frame sent\n -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:57:04.816830    2420 log.go:172] (0xc00097a9a0) Data frame received for 3\nI0224 13:57:04.816908    2420 log.go:172] (0xc000a0e000) (3) Data frame handling\nI0224 13:57:04.816922    2420 log.go:172] (0xc000a0e000) (3) Data frame sent\nI0224 13:57:04.884240    2420 log.go:172] (0xc00097a9a0) (0xc000a0e000) Stream removed, broadcasting: 3\nI0224 13:57:04.884312    2420 log.go:172] (0xc00097a9a0) Data frame received for 1\nI0224 13:57:04.884325    2420 log.go:172] (0xc000a0e8c0) (1) Data frame handling\nI0224 13:57:04.884333    2420 log.go:172] (0xc000a0e8c0) (1) Data frame sent\nI0224 13:57:04.884340    2420 log.go:172] (0xc00097a9a0) (0xc000588280) Stream removed, broadcasting: 5\nI0224 13:57:04.884403    2420 log.go:172] (0xc00097a9a0) (0xc000a0e8c0) Stream removed, broadcasting: 1\nI0224 13:57:04.884453    2420 log.go:172] (0xc00097a9a0) Go away received\nI0224 13:57:04.884697    2420 log.go:172] (0xc00097a9a0) (0xc000a0e8c0) Stream removed, broadcasting: 1\nI0224 13:57:04.884737    2420 log.go:172] (0xc00097a9a0) (0xc000a0e000) Stream removed, broadcasting: 3\nI0224 13:57:04.884747    2420 log.go:172] (0xc00097a9a0) (0xc000588280) Stream removed, broadcasting: 5\n"
Feb 24 13:57:04.891: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 13:57:04.891: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 24 13:57:14.936: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 24 13:57:24.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3615 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:57:25.347: INFO: stderr: "I0224 13:57:25.127052    2437 log.go:172] (0xc00013adc0) (0xc000634780) Create stream\nI0224 13:57:25.127363    2437 log.go:172] (0xc00013adc0) (0xc000634780) Stream added, broadcasting: 1\nI0224 13:57:25.130845    2437 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0224 13:57:25.130873    2437 log.go:172] (0xc00013adc0) (0xc000634820) Create stream\nI0224 13:57:25.130881    2437 log.go:172] (0xc00013adc0) (0xc000634820) Stream added, broadcasting: 3\nI0224 13:57:25.132346    2437 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0224 13:57:25.132369    2437 log.go:172] (0xc00013adc0) (0xc000a06000) Create stream\nI0224 13:57:25.132379    2437 log.go:172] (0xc00013adc0) (0xc000a06000) Stream added, broadcasting: 5\nI0224 13:57:25.133482    2437 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0224 13:57:25.215580    2437 log.go:172] (0xc00013adc0) Data frame received for 3\nI0224 13:57:25.215622    2437 log.go:172] (0xc000634820) (3) Data frame handling\nI0224 13:57:25.215629    2437 log.go:172] (0xc000634820) (3) Data frame sent\nI0224 13:57:25.215653    2437 log.go:172] (0xc00013adc0) Data frame received for 5\nI0224 13:57:25.215679    2437 log.go:172] (0xc000a06000) (5) Data frame handling\nI0224 13:57:25.215695    2437 log.go:172] (0xc000a06000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 13:57:25.336338    2437 log.go:172] (0xc00013adc0) (0xc000634820) Stream removed, broadcasting: 3\nI0224 13:57:25.336445    2437 log.go:172] (0xc00013adc0) Data frame received for 1\nI0224 13:57:25.336457    2437 log.go:172] (0xc000634780) (1) Data frame handling\nI0224 13:57:25.336482    2437 log.go:172] (0xc000634780) (1) Data frame sent\nI0224 13:57:25.336491    2437 log.go:172] (0xc00013adc0) (0xc000634780) Stream removed, broadcasting: 1\nI0224 13:57:25.336849    2437 log.go:172] (0xc00013adc0) (0xc000a06000) Stream removed, broadcasting: 5\nI0224 13:57:25.337076    2437 log.go:172] (0xc00013adc0) (0xc000634780) Stream removed, broadcasting: 1\nI0224 13:57:25.337095    2437 log.go:172] (0xc00013adc0) (0xc000634820) Stream removed, broadcasting: 3\nI0224 13:57:25.337104    2437 log.go:172] (0xc00013adc0) (0xc000a06000) Stream removed, broadcasting: 5\nI0224 13:57:25.337791    2437 log.go:172] (0xc00013adc0) Go away received\n"
Feb 24 13:57:25.347: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 13:57:25.347: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 13:57:35.458: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:57:35.458: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:57:35.458: INFO: Waiting for Pod statefulset-3615/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:57:45.557: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:57:45.558: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:57:45.558: INFO: Waiting for Pod statefulset-3615/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:57:55.472: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:57:55.472: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:57:55.472: INFO: Waiting for Pod statefulset-3615/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:58:05.475: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:58:05.475: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 13:58:15.472: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:58:15.472: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Feb 24 13:58:25.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3615 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 13:58:25.923: INFO: stderr: "I0224 13:58:25.646593    2456 log.go:172] (0xc0008e0580) (0xc0007bcc80) Create stream\nI0224 13:58:25.646700    2456 log.go:172] (0xc0008e0580) (0xc0007bcc80) Stream added, broadcasting: 1\nI0224 13:58:25.654856    2456 log.go:172] (0xc0008e0580) Reply frame received for 1\nI0224 13:58:25.654906    2456 log.go:172] (0xc0008e0580) (0xc0007bc000) Create stream\nI0224 13:58:25.654922    2456 log.go:172] (0xc0008e0580) (0xc0007bc000) Stream added, broadcasting: 3\nI0224 13:58:25.656277    2456 log.go:172] (0xc0008e0580) Reply frame received for 3\nI0224 13:58:25.656302    2456 log.go:172] (0xc0008e0580) (0xc0007bc140) Create stream\nI0224 13:58:25.656309    2456 log.go:172] (0xc0008e0580) (0xc0007bc140) Stream added, broadcasting: 5\nI0224 13:58:25.657678    2456 log.go:172] (0xc0008e0580) Reply frame received for 5\nI0224 13:58:25.765676    2456 log.go:172] (0xc0008e0580) Data frame received for 5\nI0224 13:58:25.765726    2456 log.go:172] (0xc0007bc140) (5) Data frame handling\nI0224 13:58:25.765747    2456 log.go:172] (0xc0007bc140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 13:58:25.823537    2456 log.go:172] (0xc0008e0580) Data frame received for 3\nI0224 13:58:25.823584    2456 log.go:172] (0xc0007bc000) (3) Data frame handling\nI0224 13:58:25.823598    2456 log.go:172] (0xc0007bc000) (3) Data frame sent\nI0224 13:58:25.917048    2456 log.go:172] (0xc0008e0580) (0xc0007bc000) Stream removed, broadcasting: 3\nI0224 13:58:25.917627    2456 log.go:172] (0xc0008e0580) Data frame received for 1\nI0224 13:58:25.917881    2456 log.go:172] (0xc0008e0580) (0xc0007bc140) Stream removed, broadcasting: 5\nI0224 13:58:25.918035    2456 log.go:172] (0xc0007bcc80) (1) Data frame handling\nI0224 13:58:25.918101    2456 log.go:172] (0xc0007bcc80) (1) Data frame sent\nI0224 13:58:25.918149    2456 log.go:172] (0xc0008e0580) (0xc0007bcc80) Stream removed, broadcasting: 1\nI0224 13:58:25.918187    2456 log.go:172] (0xc0008e0580) Go away received\nI0224 13:58:25.918696    2456 log.go:172] (0xc0008e0580) (0xc0007bcc80) Stream removed, broadcasting: 1\nI0224 13:58:25.918795    2456 log.go:172] (0xc0008e0580) (0xc0007bc000) Stream removed, broadcasting: 3\nI0224 13:58:25.918802    2456 log.go:172] (0xc0008e0580) (0xc0007bc140) Stream removed, broadcasting: 5\n"
Feb 24 13:58:25.923: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 13:58:25.923: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 24 13:58:35.995: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 24 13:58:46.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3615 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 13:58:46.395: INFO: stderr: "I0224 13:58:46.219277    2475 log.go:172] (0xc00090c370) (0xc00076a640) Create stream\nI0224 13:58:46.219479    2475 log.go:172] (0xc00090c370) (0xc00076a640) Stream added, broadcasting: 1\nI0224 13:58:46.221479    2475 log.go:172] (0xc00090c370) Reply frame received for 1\nI0224 13:58:46.221522    2475 log.go:172] (0xc00090c370) (0xc00098a000) Create stream\nI0224 13:58:46.221550    2475 log.go:172] (0xc00090c370) (0xc00098a000) Stream added, broadcasting: 3\nI0224 13:58:46.222375    2475 log.go:172] (0xc00090c370) Reply frame received for 3\nI0224 13:58:46.222401    2475 log.go:172] (0xc00090c370) (0xc000570280) Create stream\nI0224 13:58:46.222411    2475 log.go:172] (0xc00090c370) (0xc000570280) Stream added, broadcasting: 5\nI0224 13:58:46.223301    2475 log.go:172] (0xc00090c370) Reply frame received for 5\nI0224 13:58:46.307154    2475 log.go:172] (0xc00090c370) Data frame received for 5\nI0224 13:58:46.307217    2475 log.go:172] (0xc000570280) (5) Data frame handling\nI0224 13:58:46.307239    2475 log.go:172] (0xc000570280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 13:58:46.307443    2475 log.go:172] (0xc00090c370) Data frame received for 3\nI0224 13:58:46.307457    2475 log.go:172] (0xc00098a000) (3) Data frame handling\nI0224 13:58:46.307470    2475 log.go:172] (0xc00098a000) (3) Data frame sent\nI0224 13:58:46.386966    2475 log.go:172] (0xc00090c370) (0xc00098a000) Stream removed, broadcasting: 3\nI0224 13:58:46.387097    2475 log.go:172] (0xc00090c370) Data frame received for 1\nI0224 13:58:46.387116    2475 log.go:172] (0xc00076a640) (1) Data frame handling\nI0224 13:58:46.387157    2475 log.go:172] (0xc00076a640) (1) Data frame sent\nI0224 13:58:46.387199    2475 log.go:172] (0xc00090c370) (0xc00076a640) Stream removed, broadcasting: 1\nI0224 13:58:46.387439    2475 log.go:172] (0xc00090c370) (0xc000570280) Stream removed, broadcasting: 5\nI0224 13:58:46.387585    2475 log.go:172] (0xc00090c370) (0xc00076a640) Stream removed, broadcasting: 1\nI0224 13:58:46.387630    2475 log.go:172] (0xc00090c370) (0xc00098a000) Stream removed, broadcasting: 3\nI0224 13:58:46.387660    2475 log.go:172] (0xc00090c370) (0xc000570280) Stream removed, broadcasting: 5\nI0224 13:58:46.387760    2475 log.go:172] (0xc00090c370) Go away received\n"
Feb 24 13:58:46.395: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 13:58:46.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 13:58:56.435: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:58:56.435: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:58:56.435: INFO: Waiting for Pod statefulset-3615/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:58:56.435: INFO: Waiting for Pod statefulset-3615/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:59:06.465: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:59:06.465: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:59:06.465: INFO: Waiting for Pod statefulset-3615/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:59:16.449: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:59:16.449: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 24 13:59:26.447: INFO: Waiting for StatefulSet statefulset-3615/ss2 to complete update
Feb 24 13:59:26.447: INFO: Waiting for Pod statefulset-3615/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 24 13:59:36.450: INFO: Deleting all statefulset in ns statefulset-3615
Feb 24 13:59:36.455: INFO: Scaling statefulset ss2 to 0
Feb 24 14:00:16.505: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 14:00:16.511: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:00:16.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3615" for this suite.
Feb 24 14:00:24.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:00:24.730: INFO: namespace statefulset-3615 deletion completed in 8.18409647s

• [SLOW TEST:230.362 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
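The exec steps above toggle pod readiness by moving nginx's index.html out of (and back into) the web root; the trailing `|| true` keeps `kubectl exec` from failing when the file has already been moved, which makes the step safe to repeat across retries. A local simulation of that idempotent move, with temp directories standing in for the container filesystem:

```shell
# Stand-ins for /usr/share/nginx/html and /tmp inside the ss2 pod.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo "hello" > "$webroot/index.html"

# First move succeeds; the second finds nothing to move, but `|| true`
# swallows the error exactly as the e2e step does.
mv -v "$webroot/index.html" "$stash/" || true
mv -v "$webroot/index.html" "$stash/" || true
echo "second mv tolerated, exit status: $?"
```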
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:00:24.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-063207fb-4c4a-4649-9e6c-d891fdf25775
STEP: Creating configMap with name cm-test-opt-upd-86cf0404-100e-467b-aab1-6f225af2b723
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-063207fb-4c4a-4649-9e6c-d891fdf25775
STEP: Updating configmap cm-test-opt-upd-86cf0404-100e-467b-aab1-6f225af2b723
STEP: Creating configMap with name cm-test-opt-create-53edca19-bed2-4e42-8e28-b7bea150f6ef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:00:39.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9426" for this suite.
Feb 24 14:01:01.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:01:01.312: INFO: namespace configmap-9426 deletion completed in 22.187713296s

• [SLOW TEST:36.582 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:01:01.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:01:01.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3688" for this suite.
Feb 24 14:01:07.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:01:07.746: INFO: namespace kubelet-test-3688 deletion completed in 6.187350599s

• [SLOW TEST:6.434 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:01:07.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 14:01:07.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6602'
Feb 24 14:01:10.648: INFO: stderr: ""
Feb 24 14:01:10.649: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 24 14:01:20.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6602 -o json'
Feb 24 14:01:20.832: INFO: stderr: ""
Feb 24 14:01:20.832: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-24T14:01:10Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-6602\",\n        \"resourceVersion\": \"25580951\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6602/pods/e2e-test-nginx-pod\",\n        \"uid\": \"aefec2cb-97e0-4371-bc74-e879ceecaa8c\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-hr22v\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-hr22v\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-hr22v\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T14:01:10Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T14:01:17Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T14:01:17Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T14:01:10Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://8a86acc93e49d59f51d198a828833bb63a03c77c9253ac50f1574a915a38c4a8\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-24T14:01:17Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-24T14:01:10Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 24 14:01:20.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6602'
Feb 24 14:01:21.176: INFO: stderr: ""
Feb 24 14:01:21.176: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 24 14:01:21.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6602'
Feb 24 14:01:29.718: INFO: stderr: ""
Feb 24 14:01:29.718: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:01:29.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6602" for this suite.
Feb 24 14:01:35.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:01:35.905: INFO: namespace kubectl-6602 deletion completed in 6.150019249s

• [SLOW TEST:28.158 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
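Editor's note: the replace step in the test above pipes a full pod manifest to `kubectl replace -f - --namespace=kubectl-6602`; the log does not show the manifest itself. A plausible sketch of what such a manifest looks like, assuming only the names and images that appear in the log (`kubectl replace` is a full-object update, so everything except the image must match the live object):

```yaml
# Hypothetical reconstruction of the replacement manifest — not taken from
# the log. Only the container image changes (nginx:1.14-alpine -> busybox:1.29),
# matching the image verified at 14:01:21.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-6602
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
```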
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:01:35.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:01:35.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553" in namespace "projected-5944" to be "success or failure"
Feb 24 14:01:36.003: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Pending", Reason="", readiness=false. Elapsed: 42.721914ms
Feb 24 14:01:38.010: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049129093s
Feb 24 14:01:40.023: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062542117s
Feb 24 14:01:42.030: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068902799s
Feb 24 14:01:44.042: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081268712s
Feb 24 14:01:46.053: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092557753s
STEP: Saw pod success
Feb 24 14:01:46.053: INFO: Pod "downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553" satisfied condition "success or failure"
Feb 24 14:01:46.060: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553 container client-container: 
STEP: delete the pod
Feb 24 14:01:46.886: INFO: Waiting for pod downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553 to disappear
Feb 24 14:01:46.896: INFO: Pod downwardapi-volume-26d505db-e6d2-4815-a37d-398b4d98d553 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:01:46.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5944" for this suite.
Feb 24 14:01:52.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:01:53.122: INFO: namespace projected-5944 deletion completed in 6.220971832s

• [SLOW TEST:17.217 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
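Editor's note: the downward API test above creates a pod whose projected volume exposes the container's CPU request as a file, then reads it back from the container log. A minimal sketch of that pattern, assuming the container name from the log (`client-container`); the image, file path, and request value are illustrative, not taken from the suite:

```yaml
# Sketch: a projected downwardAPI volume surfaces requests.cpu via
# resourceFieldRef; the container prints the file and exits, which is why
# the test waits for phase Succeeded ("success or failure").
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                         # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                          # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
```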
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:01:53.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3474
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 14:01:53.178: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 24 14:02:33.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3474 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:02:33.472: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:02:33.595397       8 log.go:172] (0xc0006b9080) (0xc001827180) Create stream
I0224 14:02:33.595504       8 log.go:172] (0xc0006b9080) (0xc001827180) Stream added, broadcasting: 1
I0224 14:02:33.610844       8 log.go:172] (0xc0006b9080) Reply frame received for 1
I0224 14:02:33.610933       8 log.go:172] (0xc0006b9080) (0xc001504960) Create stream
I0224 14:02:33.610962       8 log.go:172] (0xc0006b9080) (0xc001504960) Stream added, broadcasting: 3
I0224 14:02:33.614878       8 log.go:172] (0xc0006b9080) Reply frame received for 3
I0224 14:02:33.614920       8 log.go:172] (0xc0006b9080) (0xc002d10a00) Create stream
I0224 14:02:33.614933       8 log.go:172] (0xc0006b9080) (0xc002d10a00) Stream added, broadcasting: 5
I0224 14:02:33.617992       8 log.go:172] (0xc0006b9080) Reply frame received for 5
I0224 14:02:33.905801       8 log.go:172] (0xc0006b9080) Data frame received for 3
I0224 14:02:33.905947       8 log.go:172] (0xc001504960) (3) Data frame handling
I0224 14:02:33.905978       8 log.go:172] (0xc001504960) (3) Data frame sent
I0224 14:02:34.095661       8 log.go:172] (0xc0006b9080) Data frame received for 1
I0224 14:02:34.095696       8 log.go:172] (0xc001827180) (1) Data frame handling
I0224 14:02:34.095738       8 log.go:172] (0xc001827180) (1) Data frame sent
I0224 14:02:34.095857       8 log.go:172] (0xc0006b9080) (0xc002d10a00) Stream removed, broadcasting: 5
I0224 14:02:34.095895       8 log.go:172] (0xc0006b9080) (0xc001504960) Stream removed, broadcasting: 3
I0224 14:02:34.096065       8 log.go:172] (0xc0006b9080) (0xc001827180) Stream removed, broadcasting: 1
I0224 14:02:34.096339       8 log.go:172] (0xc0006b9080) Go away received
I0224 14:02:34.096639       8 log.go:172] (0xc0006b9080) (0xc001827180) Stream removed, broadcasting: 1
I0224 14:02:34.096674       8 log.go:172] (0xc0006b9080) (0xc001504960) Stream removed, broadcasting: 3
I0224 14:02:34.096692       8 log.go:172] (0xc0006b9080) (0xc002d10a00) Stream removed, broadcasting: 5
Feb 24 14:02:34.096: INFO: Waiting for endpoints: map[]
Feb 24 14:02:34.112: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3474 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:02:34.112: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:02:34.175230       8 log.go:172] (0xc0006b9ad0) (0xc0018275e0) Create stream
I0224 14:02:34.175264       8 log.go:172] (0xc0006b9ad0) (0xc0018275e0) Stream added, broadcasting: 1
I0224 14:02:34.192994       8 log.go:172] (0xc0006b9ad0) Reply frame received for 1
I0224 14:02:34.193046       8 log.go:172] (0xc0006b9ad0) (0xc00267f720) Create stream
I0224 14:02:34.193054       8 log.go:172] (0xc0006b9ad0) (0xc00267f720) Stream added, broadcasting: 3
I0224 14:02:34.196037       8 log.go:172] (0xc0006b9ad0) Reply frame received for 3
I0224 14:02:34.196070       8 log.go:172] (0xc0006b9ad0) (0xc001827680) Create stream
I0224 14:02:34.196078       8 log.go:172] (0xc0006b9ad0) (0xc001827680) Stream added, broadcasting: 5
I0224 14:02:34.198030       8 log.go:172] (0xc0006b9ad0) Reply frame received for 5
I0224 14:02:34.349775       8 log.go:172] (0xc0006b9ad0) Data frame received for 3
I0224 14:02:34.349832       8 log.go:172] (0xc00267f720) (3) Data frame handling
I0224 14:02:34.349848       8 log.go:172] (0xc00267f720) (3) Data frame sent
I0224 14:02:34.494921       8 log.go:172] (0xc0006b9ad0) Data frame received for 1
I0224 14:02:34.495029       8 log.go:172] (0xc0006b9ad0) (0xc00267f720) Stream removed, broadcasting: 3
I0224 14:02:34.495087       8 log.go:172] (0xc0018275e0) (1) Data frame handling
I0224 14:02:34.495142       8 log.go:172] (0xc0018275e0) (1) Data frame sent
I0224 14:02:34.495193       8 log.go:172] (0xc0006b9ad0) (0xc001827680) Stream removed, broadcasting: 5
I0224 14:02:34.495243       8 log.go:172] (0xc0006b9ad0) (0xc0018275e0) Stream removed, broadcasting: 1
I0224 14:02:34.495291       8 log.go:172] (0xc0006b9ad0) Go away received
I0224 14:02:34.495550       8 log.go:172] (0xc0006b9ad0) (0xc0018275e0) Stream removed, broadcasting: 1
I0224 14:02:34.495586       8 log.go:172] (0xc0006b9ad0) (0xc00267f720) Stream removed, broadcasting: 3
I0224 14:02:34.495607       8 log.go:172] (0xc0006b9ad0) (0xc001827680) Stream removed, broadcasting: 5
Feb 24 14:02:34.495: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:02:34.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3474" for this suite.
Feb 24 14:02:56.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:02:56.677: INFO: namespace pod-network-test-3474 deletion completed in 22.16601621s

• [SLOW TEST:63.554 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:02:56.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 24 14:02:56.793: INFO: Waiting up to 5m0s for pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a" in namespace "emptydir-4384" to be "success or failure"
Feb 24 14:02:56.806: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.255377ms
Feb 24 14:02:58.815: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021467988s
Feb 24 14:03:00.829: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036006784s
Feb 24 14:03:02.835: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042090325s
Feb 24 14:03:04.844: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051290394s
STEP: Saw pod success
Feb 24 14:03:04.845: INFO: Pod "pod-e04d63e5-7c10-4760-812a-2fb49346972a" satisfied condition "success or failure"
Feb 24 14:03:04.850: INFO: Trying to get logs from node iruya-node pod pod-e04d63e5-7c10-4760-812a-2fb49346972a container test-container: 
STEP: delete the pod
Feb 24 14:03:04.924: INFO: Waiting for pod pod-e04d63e5-7c10-4760-812a-2fb49346972a to disappear
Feb 24 14:03:04.935: INFO: Pod pod-e04d63e5-7c10-4760-812a-2fb49346972a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:03:04.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4384" for this suite.
Feb 24 14:03:11.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:03:11.155: INFO: namespace emptydir-4384 deletion completed in 6.213019542s

• [SLOW TEST:14.478 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
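Editor's note: the emptyDir test above ("non-root,0777,default") runs a non-root container that writes into an emptyDir on the node's default medium and checks the resulting file mode. A sketch of the shape of that pod, with image, UID, and commands as illustrative assumptions:

```yaml
# Sketch of the emptyDir permission check: non-root user, default medium
# (node disk, i.e. emptyDir: {}), write a file and report its mode. The
# real suite generates this pod programmatically.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                        # the "non-root" variant
  containers:
  - name: test-container
    image: busybox                         # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                           # "default" medium per the test name
```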
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:03:11.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:03:11.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e" in namespace "projected-9814" to be "success or failure"
Feb 24 14:03:11.261: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764296ms
Feb 24 14:03:13.270: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015891349s
Feb 24 14:03:15.278: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023985453s
Feb 24 14:03:17.286: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031849268s
Feb 24 14:03:19.296: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042523482s
Feb 24 14:03:21.302: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04866919s
STEP: Saw pod success
Feb 24 14:03:21.303: INFO: Pod "downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e" satisfied condition "success or failure"
Feb 24 14:03:21.310: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e container client-container: 
STEP: delete the pod
Feb 24 14:03:21.355: INFO: Waiting for pod downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e to disappear
Feb 24 14:03:21.367: INFO: Pod downwardapi-volume-2509cf9d-b538-49a5-b586-f4698a3cef8e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:03:21.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9814" for this suite.
Feb 24 14:03:27.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:03:27.554: INFO: namespace projected-9814 deletion completed in 6.180815087s

• [SLOW TEST:16.398 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:03:27.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-cd2debb1-8ab2-4dd6-b937-3ab8ac5e31ae
STEP: Creating a pod to test consume secrets
Feb 24 14:03:27.668: INFO: Waiting up to 5m0s for pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d" in namespace "secrets-692" to be "success or failure"
Feb 24 14:03:27.678: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.726852ms
Feb 24 14:03:29.687: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018921666s
Feb 24 14:03:31.695: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02742759s
Feb 24 14:03:33.701: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033284664s
Feb 24 14:03:35.716: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048397272s
Feb 24 14:03:37.732: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064512058s
STEP: Saw pod success
Feb 24 14:03:37.732: INFO: Pod "pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d" satisfied condition "success or failure"
Feb 24 14:03:37.739: INFO: Trying to get logs from node iruya-node pod pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d container secret-volume-test: 
STEP: delete the pod
Feb 24 14:03:37.932: INFO: Waiting for pod pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d to disappear
Feb 24 14:03:37.955: INFO: Pod pod-secrets-0fff3383-af7f-4c8d-b8b8-0331e664037d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:03:37.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-692" for this suite.
Feb 24 14:03:44.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:03:44.155: INFO: namespace secrets-692 deletion completed in 6.193514307s

• [SLOW TEST:16.601 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
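Editor's note: the secrets test above creates a secret, mounts it as a volume with an explicit `defaultMode`, and verifies both the content and the file permissions from inside the pod. A sketch of that configuration; the secret name follows the log's naming pattern, while the mode, key, and image are illustrative:

```yaml
# Sketch: secret volume with defaultMode set. The container lists the
# mounted files (to show the mode) and prints one key, then exits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                         # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example      # illustrative name
      defaultMode: 0400                    # illustrative mode under test
```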
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:03:44.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-cb1162f7-6411-4e73-a4df-2f051bcca8d1 in namespace container-probe-6114
Feb 24 14:03:52.282: INFO: Started pod test-webserver-cb1162f7-6411-4e73-a4df-2f051bcca8d1 in namespace container-probe-6114
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 14:03:52.285: INFO: Initial restart count of pod test-webserver-cb1162f7-6411-4e73-a4df-2f051bcca8d1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:07:52.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6114" for this suite.
Feb 24 14:07:58.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:07:58.941: INFO: namespace container-probe-6114 deletion completed in 6.172381726s

• [SLOW TEST:254.786 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
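Editor's note: the probe test above starts a web-server pod with an HTTP liveness probe against `/healthz` and then watches for ~4 minutes that `restartCount` stays at 0 (hence the 254-second runtime). A sketch of the probed pod; the image and probe timings are illustrative assumptions, only the pod-name prefix and probe path come from the log:

```yaml
# Sketch: httpGet liveness probe that keeps succeeding, so the kubelet
# never restarts the container and restartCount remains 0.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine               # assumption: any server answering on port 80
    livenessProbe:
      httpGet:
        path: /healthz                     # path checked by the test
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
```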
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:07:58.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 24 14:07:59.011: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 24 14:07:59.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:07:59.333: INFO: stderr: ""
Feb 24 14:07:59.333: INFO: stdout: "service/redis-slave created\n"
Feb 24 14:07:59.333: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 24 14:07:59.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:07:59.667: INFO: stderr: ""
Feb 24 14:07:59.667: INFO: stdout: "service/redis-master created\n"
Feb 24 14:07:59.668: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 24 14:07:59.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:08:00.002: INFO: stderr: ""
Feb 24 14:08:00.002: INFO: stdout: "service/frontend created\n"
Feb 24 14:08:00.002: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 24 14:08:00.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:08:00.282: INFO: stderr: ""
Feb 24 14:08:00.282: INFO: stdout: "deployment.apps/frontend created\n"
Feb 24 14:08:00.282: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 24 14:08:00.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:08:00.678: INFO: stderr: ""
Feb 24 14:08:00.678: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 24 14:08:00.678: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 24 14:08:00.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4414'
Feb 24 14:08:01.828: INFO: stderr: ""
Feb 24 14:08:01.828: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 24 14:08:01.828: INFO: Waiting for all frontend pods to be Running.
Feb 24 14:08:31.880: INFO: Waiting for frontend to serve content.
Feb 24 14:08:32.146: INFO: Trying to add a new entry to the guestbook.
Feb 24 14:08:32.182: INFO: Verifying that added entry can be retrieved.
Feb 24 14:08:32.197: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 24 14:08:37.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:37.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:37.461: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 14:08:37.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:37.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:37.650: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 14:08:37.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:37.799: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:37.799: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 14:08:37.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:37.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:37.948: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 14:08:37.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:38.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:38.034: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 14:08:38.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4414'
Feb 24 14:08:38.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 14:08:38.142: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:08:38.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4414" for this suite.
Feb 24 14:09:18.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:09:18.396: INFO: namespace kubectl-4414 deletion completed in 40.175459789s

• [SLOW TEST:79.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
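The guestbook manifests above set `GET_HOSTS_FROM=dns`, so the frontend locates the `redis-master` and `redis-slave` Services through cluster DNS instead of injected environment variables. As a minimal sketch of the naming convention involved (the `cluster.local` suffix is the common default and an assumption here; real clusters can configure a different cluster domain):

```python
def service_dns_name(service: str, namespace: str,
                     cluster_domain: str = "cluster.local") -> str:
    """Build the conventional in-cluster DNS name for a Service.

    "cluster.local" is the usual default cluster domain; this is an
    assumption for illustration, not read from any real cluster.
    """
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Within the same namespace a bare "redis-master" also resolves via the
# pod's DNS search path; the fully qualified name works from anywhere.
print(service_dns_name("redis-master", "kubectl-4414"))
# redis-master.kubectl-4414.svc.cluster.local
```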
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:09:18.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 24 14:09:18.493: INFO: Waiting up to 5m0s for pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3" in namespace "emptydir-9522" to be "success or failure"
Feb 24 14:09:18.563: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3": Phase="Pending", Reason="", readiness=false. Elapsed: 69.978129ms
Feb 24 14:09:20.578: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084313757s
Feb 24 14:09:22.593: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099952864s
Feb 24 14:09:24.606: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112215451s
Feb 24 14:09:26.625: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131513596s
STEP: Saw pod success
Feb 24 14:09:26.625: INFO: Pod "pod-59748ab0-52db-4235-96d5-0b2f208e51a3" satisfied condition "success or failure"
Feb 24 14:09:26.635: INFO: Trying to get logs from node iruya-node pod pod-59748ab0-52db-4235-96d5-0b2f208e51a3 container test-container: 
STEP: delete the pod
Feb 24 14:09:26.749: INFO: Waiting for pod pod-59748ab0-52db-4235-96d5-0b2f208e51a3 to disappear
Feb 24 14:09:26.768: INFO: Pod pod-59748ab0-52db-4235-96d5-0b2f208e51a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:09:26.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9522" for this suite.
Feb 24 14:09:32.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:09:33.303: INFO: namespace emptydir-9522 deletion completed in 6.468809806s

• [SLOW TEST:14.907 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
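The EmptyDir test above runs a pod that writes a file with mode 0644 onto a tmpfs-backed emptyDir volume and verifies the permission bits and content as a non-root user. A rough stand-alone sketch of that in-pod check, using an ordinary temporary directory as a stand-in for the tmpfs mount (an assumption for illustration):

```python
import os
import stat
import tempfile

def write_and_check(content: str = "mount-tmpfs\n") -> tuple:
    """Create a file with mode 0644 and return (octal mode, content),
    mirroring the assertion the e2e pod makes on its emptyDir mount."""
    with tempfile.TemporaryDirectory() as vol:  # stand-in for the tmpfs volume
        path = os.path.join(vol, "test-file")
        with open(path, "w") as f:
            f.write(content)
        os.chmod(path, 0o644)  # explicit chmod, so the umask does not matter
        mode = stat.S_IMODE(os.stat(path).st_mode)
        with open(path) as f:
            return oct(mode), f.read()

print(write_and_check())  # ('0o644', 'mount-tmpfs\n')
```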
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:09:33.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:09:33.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475" in namespace "downward-api-1599" to be "success or failure"
Feb 24 14:09:33.437: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Pending", Reason="", readiness=false. Elapsed: 56.180921ms
Feb 24 14:09:35.445: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064359674s
Feb 24 14:09:37.453: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072612528s
Feb 24 14:09:39.468: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087642739s
Feb 24 14:09:41.481: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1004983s
Feb 24 14:09:43.490: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109485067s
STEP: Saw pod success
Feb 24 14:09:43.490: INFO: Pod "downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475" satisfied condition "success or failure"
Feb 24 14:09:43.495: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475 container client-container: 
STEP: delete the pod
Feb 24 14:09:43.611: INFO: Waiting for pod downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475 to disappear
Feb 24 14:09:43.622: INFO: Pod downwardapi-volume-c115790c-30fe-462c-9135-daa4461b7475 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:09:43.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1599" for this suite.
Feb 24 14:09:49.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:09:49.831: INFO: namespace downward-api-1599 deletion completed in 6.20260516s

• [SLOW TEST:16.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
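The Downward API test above projects the container's CPU limit into a file on a downwardAPI volume; the value arrives as a Kubernetes resource quantity such as `100m` or `2`. A simplified sketch of converting such a quantity to millicores (the unit a `divisor: 1m` projection reports); the full quantity grammar also allows exponent forms, which this deliberately omits:

```python
def cpu_quantity_to_millicores(q: str) -> int:
    """Parse a plain Kubernetes CPU quantity ("2", "0.5", "100m") into
    millicores. Simplified: exponent/suffix forms beyond "m" are not
    handled here."""
    q = q.strip()
    if q.endswith("m"):
        return int(q[:-1])          # already millicores
    return int(float(q) * 1000)     # whole/fractional cores -> millicores

print(cpu_quantity_to_millicores("100m"))  # 100
print(cpu_quantity_to_millicores("0.5"))   # 500
print(cpu_quantity_to_millicores("2"))     # 2000
```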
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:09:49.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 24 14:09:49.918: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 14:09:49.944: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 14:09:49.947: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 24 14:09:49.960: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 24 14:09:49.960: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:09:49.960: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:09:49.960: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.960: INFO: 	Container kube-bench ready: false, restart count 0
Feb 24 14:09:49.960: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.960: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 14:09:49.960: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 24 14:09:49.971: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 24 14:09:49.971: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 14:09:49.971: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 24 14:09:49.971: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 24 14:09:49.971: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container coredns ready: true, restart count 0
Feb 24 14:09:49.971: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container etcd ready: true, restart count 0
Feb 24 14:09:49.971: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:09:49.971: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:09:49.971: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:09:49.971: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:10:10.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-286" for this suite.
Feb 24 14:10:24.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:10:24.412: INFO: namespace sched-pred-286 deletion completed in 14.223521248s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:34.580 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
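The scheduler test above applies a random label (`kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2` with value `42`) to a node, then relaunches the pod with a matching `nodeSelector` and expects it to land on that node. The matching rule itself is a plain subset check, sketched here:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A pod's spec.nodeSelector admits only nodes whose labels contain
    every selector key with exactly the same value (subset match)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

labels = {
    "kubernetes.io/hostname": "iruya-node",
    "kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2": "42",
}
print(node_selector_matches(
    labels,
    {"kubernetes.io/e2e-20005331-3f2d-4166-829d-62a8ab4a26c2": "42"}))  # True
print(node_selector_matches(labels, {"disktype": "ssd"}))               # False
```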
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:10:24.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 24 14:10:40.684: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:40.704: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:42.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:42.745: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:44.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:44.712: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:46.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:46.711: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:48.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:48.720: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:50.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:50.722: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:52.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:52.712: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:54.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:54.719: INFO: Pod pod-with-poststart-http-hook still exists
Feb 24 14:10:56.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 24 14:10:56.710: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:10:56.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8792" for this suite.
Feb 24 14:11:18.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:11:18.870: INFO: namespace container-lifecycle-hook-8792 deletion completed in 22.155314075s

• [SLOW TEST:54.459 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
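The lifecycle-hook test above deletes the pod and then polls roughly every two seconds ("Waiting for pod pod-with-poststart-http-hook to disappear") until the GET reports it gone. A minimal sketch of such a bounded poll loop, with the existence probe injected so it can run without a cluster:

```python
import itertools

def wait_for_disappear(still_exists, attempts: int = 10) -> bool:
    """Poll a probe until it reports the object is gone; return True if
    it vanished within the attempt budget. The probe is a stand-in for
    the API GET the real test performs (which also sleeps ~2s between
    polls; the sleep is omitted here)."""
    for _ in range(attempts):
        if not still_exists():
            return True
    return False

# Fake probe: the pod "exists" for the first 8 polls, as in the log above.
state = itertools.chain([True] * 8, itertools.repeat(False))
print(wait_for_disappear(lambda: next(state)))  # True
```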
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:11:18.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:11:19.173: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"68a8fc04-927e-47fd-80f7-32cd16ef1dc0", Controller:(*bool)(0xc0021a6f6a), BlockOwnerDeletion:(*bool)(0xc0021a6f6b)}}
Feb 24 14:11:19.237: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3de2ea41-c496-40ea-ac13-feccb067c807", Controller:(*bool)(0xc001b83b1a), BlockOwnerDeletion:(*bool)(0xc001b83b1b)}}
Feb 24 14:11:19.301: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2925b815-622f-4f9b-bdfb-68970a46f7e6", Controller:(*bool)(0xc0021a7142), BlockOwnerDeletion:(*bool)(0xc0021a7143)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:11:24.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6035" for this suite.
Feb 24 14:11:30.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:11:30.611: INFO: namespace gc-6035 deletion completed in 6.209383628s

• [SLOW TEST:11.740 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
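The garbage-collector test above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the `OwnerReferences` lines) and checks that collection is not blocked by it. A small sketch of detecting such a cycle in a child-to-owner map:

```python
def has_owner_cycle(owners: dict) -> bool:
    """Detect a cycle in an ownerReference graph given as a
    child -> owner mapping. The garbage collector must tolerate such
    cycles rather than deadlock, which is what the test exercises."""
    for start in owners:
        seen, node = set(), start
        while node in owners:       # follow the chain of owners
            if node in seen:
                return True         # revisited a node: cycle
            seen.add(node)
            node = owners[node]
    return False

# Ownership recorded in the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2
print(has_owner_cycle({"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}))  # True
print(has_owner_cycle({"pod2": "pod1", "pod3": "pod2"}))                  # False
```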
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:11:30.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 24 14:11:30.692: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582327,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 24 14:11:30.692: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582327,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 24 14:11:40.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582341,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 24 14:11:40.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582341,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 24 14:11:50.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582355,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 24 14:11:50.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582355,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 24 14:12:00.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582369,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 24 14:12:00.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-a,UID:d376f222-6681-4f47-898a-ec89a1373e8a,ResourceVersion:25582369,Generation:0,CreationTimestamp:2020-02-24 14:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 24 14:12:10.760: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-b,UID:aba4c0df-a51f-42a6-b9e8-08ce22f091e4,ResourceVersion:25582383,Generation:0,CreationTimestamp:2020-02-24 14:12:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 24 14:12:10.760: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-b,UID:aba4c0df-a51f-42a6-b9e8-08ce22f091e4,ResourceVersion:25582383,Generation:0,CreationTimestamp:2020-02-24 14:12:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 24 14:12:20.774: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-b,UID:aba4c0df-a51f-42a6-b9e8-08ce22f091e4,ResourceVersion:25582399,Generation:0,CreationTimestamp:2020-02-24 14:12:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 24 14:12:20.774: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4422,SelfLink:/api/v1/namespaces/watch-4422/configmaps/e2e-watch-test-configmap-b,UID:aba4c0df-a51f-42a6-b9e8-08ce22f091e4,ResourceVersion:25582399,Generation:0,CreationTimestamp:2020-02-24 14:12:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:12:30.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4422" for this suite.
Feb 24 14:12:36.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:12:36.929: INFO: namespace watch-4422 deletion completed in 6.130660193s

• [SLOW TEST:66.318 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
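The watch notifications above are driven by label selection; a configmap shaped like the following (a hypothetical sketch, with name, label, and data values taken from the log's object dumps rather than the test's actual fixture) is what the A-labeled watchers observe:

```yaml
# Sketch of the configmap this test watches; names and labels are
# assumed from the log output above, not from the test source.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"
```

Watchers filtering on `watch-this-configmap=multiple-watchers-A` receive the ADDED, MODIFIED, and DELETED events for this object, while B-labeled watchers do not.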
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:12:36.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a8c365a3-e635-4d98-b86c-f17251ff4cf5
STEP: Creating a pod to test consume secrets
Feb 24 14:12:37.085: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953" in namespace "projected-7326" to be "success or failure"
Feb 24 14:12:37.105: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953": Phase="Pending", Reason="", readiness=false. Elapsed: 19.98423ms
Feb 24 14:12:39.110: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025416713s
Feb 24 14:12:41.115: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030357694s
Feb 24 14:12:43.120: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035784117s
Feb 24 14:12:45.133: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04832343s
STEP: Saw pod success
Feb 24 14:12:45.133: INFO: Pod "pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953" satisfied condition "success or failure"
Feb 24 14:12:45.137: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953 container projected-secret-volume-test: 
STEP: delete the pod
Feb 24 14:12:45.191: INFO: Waiting for pod pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953 to disappear
Feb 24 14:12:45.478: INFO: Pod pod-projected-secrets-87ab4aa4-cbeb-461c-9881-03308649c953 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:12:45.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7326" for this suite.
Feb 24 14:12:51.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:12:51.699: INFO: namespace projected-7326 deletion completed in 6.213502706s

• [SLOW TEST:14.769 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
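The test above creates a pod that mounts a secret through a projected volume and reads it back. A minimal sketch of that pod shape, with hypothetical names standing in for the generated ones in the log, might look like:

```yaml
# Hedged sketch: a projected volume sourcing a secret, consumed by a
# short-lived test container. Names and mount path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
```

The pod succeeds ("success or failure" condition above) when the container exits 0 after reading the projected file.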
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:12:51.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 24 14:12:52.571: INFO: Pod name wrapped-volume-race-a2a7fc25-a481-4a14-bfe9-e6daa866d76e: Found 0 pods out of 5
Feb 24 14:12:57.587: INFO: Pod name wrapped-volume-race-a2a7fc25-a481-4a14-bfe9-e6daa866d76e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a2a7fc25-a481-4a14-bfe9-e6daa866d76e in namespace emptydir-wrapper-9788, will wait for the garbage collector to delete the pods
Feb 24 14:13:27.740: INFO: Deleting ReplicationController wrapped-volume-race-a2a7fc25-a481-4a14-bfe9-e6daa866d76e took: 19.068536ms
Feb 24 14:13:28.141: INFO: Terminating ReplicationController wrapped-volume-race-a2a7fc25-a481-4a14-bfe9-e6daa866d76e pods took: 400.532609ms
STEP: Creating RC which spawns configmap-volume pods
Feb 24 14:14:17.702: INFO: Pod name wrapped-volume-race-f05b60da-6bdc-4b8d-be63-2c451ddf758b: Found 0 pods out of 5
Feb 24 14:14:22.718: INFO: Pod name wrapped-volume-race-f05b60da-6bdc-4b8d-be63-2c451ddf758b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f05b60da-6bdc-4b8d-be63-2c451ddf758b in namespace emptydir-wrapper-9788, will wait for the garbage collector to delete the pods
Feb 24 14:14:58.835: INFO: Deleting ReplicationController wrapped-volume-race-f05b60da-6bdc-4b8d-be63-2c451ddf758b took: 20.466155ms
Feb 24 14:14:59.235: INFO: Terminating ReplicationController wrapped-volume-race-f05b60da-6bdc-4b8d-be63-2c451ddf758b pods took: 400.284375ms
STEP: Creating RC which spawns configmap-volume pods
Feb 24 14:15:47.033: INFO: Pod name wrapped-volume-race-b2d9ddd5-be3d-4c11-81eb-c6bfb1082cbc: Found 0 pods out of 5
Feb 24 14:15:52.117: INFO: Pod name wrapped-volume-race-b2d9ddd5-be3d-4c11-81eb-c6bfb1082cbc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b2d9ddd5-be3d-4c11-81eb-c6bfb1082cbc in namespace emptydir-wrapper-9788, will wait for the garbage collector to delete the pods
Feb 24 14:16:24.269: INFO: Deleting ReplicationController wrapped-volume-race-b2d9ddd5-be3d-4c11-81eb-c6bfb1082cbc took: 19.033738ms
Feb 24 14:16:24.669: INFO: Terminating ReplicationController wrapped-volume-race-b2d9ddd5-be3d-4c11-81eb-c6bfb1082cbc pods took: 400.257677ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:17:17.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9788" for this suite.
Feb 24 14:17:31.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:17:31.594: INFO: namespace emptydir-wrapper-9788 deletion completed in 14.17711984s

• [SLOW TEST:279.895 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
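The race this test guards against involved many configmap volumes mounted in one pod, all historically wrapped in a shared emptyDir. A hedged sketch of the pod shape each RC replica runs (two volumes shown; the test uses 50, and all names here are assumptions):

```yaml
# Sketch only: each replica mounts many configmap volumes at once,
# stressing concurrent volume setup. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: racey-configmap-0
      mountPath: /etc/config-0
    - name: racey-configmap-1
      mountPath: /etc/config-1
  volumes:
  - name: racey-configmap-0
    configMap:
      name: racey-configmap-0
  - name: racey-configmap-1
    configMap:
      name: racey-configmap-1
```

Repeating the create/delete cycle three times, as the log shows, is what surfaces the race if volume setup is not safe under concurrency.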
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:17:31.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-fd11feeb-3c47-4412-aff8-98063db5c98b
STEP: Creating a pod to test consume secrets
Feb 24 14:17:31.709: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973" in namespace "projected-1612" to be "success or failure"
Feb 24 14:17:31.716: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202394ms
Feb 24 14:17:33.729: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019860871s
Feb 24 14:17:35.736: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026819083s
Feb 24 14:17:37.744: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034810305s
Feb 24 14:17:39.752: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043167131s
Feb 24 14:17:41.761: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051689247s
STEP: Saw pod success
Feb 24 14:17:41.761: INFO: Pod "pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973" satisfied condition "success or failure"
Feb 24 14:17:41.765: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973 container projected-secret-volume-test: 
STEP: delete the pod
Feb 24 14:17:41.862: INFO: Waiting for pod pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973 to disappear
Feb 24 14:17:41.879: INFO: Pod pod-projected-secrets-f5b7fb3d-cc57-4951-b916-6f23b7a35973 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:17:41.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1612" for this suite.
Feb 24 14:17:48.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:17:48.617: INFO: namespace projected-1612 deletion completed in 6.72495418s

• [SLOW TEST:17.022 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
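This variant additionally runs the container as a non-root user with an fsGroup and a restrictive defaultMode on the projection. A minimal sketch under those assumptions (UID, GID, and mode values are illustrative, not taken from the test source):

```yaml
# Hypothetical sketch: non-root user, fsGroup ownership, and a 0440
# defaultMode on the projected secret volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-nonroot-example
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-test-example
```

With fsGroup set, the kubelet chowns the volume contents to the supplemental group so the non-root container can read the group-readable files.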
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:17:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 24 14:17:48.744: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3082" to be "success or failure"
Feb 24 14:17:48.748: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.896369ms
Feb 24 14:17:50.755: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010683107s
Feb 24 14:17:52.764: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019841105s
Feb 24 14:17:54.782: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037439014s
Feb 24 14:17:56.794: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049402789s
Feb 24 14:17:58.800: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055471685s
Feb 24 14:18:00.810: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06528854s
STEP: Saw pod success
Feb 24 14:18:00.810: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 24 14:18:00.815: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 24 14:18:01.028: INFO: Waiting for pod pod-host-path-test to disappear
Feb 24 14:18:01.034: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:18:01.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3082" for this suite.
Feb 24 14:18:07.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:18:07.188: INFO: namespace hostpath-3082 deletion completed in 6.137806877s

• [SLOW TEST:18.571 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
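The hostPath test checks the mode bits a container observes on a host-backed volume. A sketch of the `pod-host-path-test` shape named in the log (the host path and command are assumptions):

```yaml
# Hedged sketch of the hostPath pod; the test verifies the mode the
# container sees on the mounted directory. Path is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["ls", "-ld", "/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test   # assumed path
```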
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:18:07.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-0dc0279c-7e7c-49e6-8dbb-062d1bd50193
STEP: Creating a pod to test consume secrets
Feb 24 14:18:07.321: INFO: Waiting up to 5m0s for pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de" in namespace "secrets-8373" to be "success or failure"
Feb 24 14:18:07.338: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Pending", Reason="", readiness=false. Elapsed: 16.531255ms
Feb 24 14:18:09.345: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024282701s
Feb 24 14:18:11.422: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100570658s
Feb 24 14:18:13.432: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11077097s
Feb 24 14:18:15.443: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122201667s
Feb 24 14:18:17.454: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132654334s
STEP: Saw pod success
Feb 24 14:18:17.454: INFO: Pod "pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de" satisfied condition "success or failure"
Feb 24 14:18:17.459: INFO: Trying to get logs from node iruya-node pod pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de container secret-volume-test: 
STEP: delete the pod
Feb 24 14:18:17.567: INFO: Waiting for pod pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de to disappear
Feb 24 14:18:17.606: INFO: Pod pod-secrets-cf614118-21b8-477c-92aa-d71bd230e7de no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:18:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8373" for this suite.
Feb 24 14:18:23.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:18:23.725: INFO: namespace secrets-8373 deletion completed in 6.112721084s

• [SLOW TEST:16.537 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
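"Mappings and Item Mode set" refers to a secret volume that remaps a key to a different file path and gives that item its own mode. A sketch with assumed names and values:

```yaml
# Sketch only: an items mapping renames the key's file and sets a
# per-item mode. Secret name, key, path, and mode are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
```

The per-item `mode` overrides the volume's defaultMode for that one file, which is exactly what the test asserts from inside the container.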
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:18:23.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:18:23.838: INFO: Creating deployment "test-recreate-deployment"
Feb 24 14:18:23.850: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 24 14:18:23.931: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 24 14:18:25.947: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 24 14:18:25.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:18:27.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:18:29.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718150703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:18:31.959: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 24 14:18:31.971: INFO: Updating deployment test-recreate-deployment
Feb 24 14:18:31.971: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
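The deployment under test uses the Recreate strategy, which is why new pods must never overlap with old ones. A hedged minimal manifest consistent with the object dump in this log (image and labels match the dump; everything else is a sketch):

```yaml
# Sketch of the Recreate deployment the test drives; with
# strategy.type Recreate, all old pods are terminated before
# any new-revision pod is created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Triggering a rollout (here, swapping the image) scales the old ReplicaSet to zero first, then brings up the new one, which the watch above verifies.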
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 24 14:18:32.387: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/deployments/test-recreate-deployment,UID:6c9ca8f8-7ee3-4577-bf73-f935b6309174,ResourceVersion:25583884,Generation:2,CreationTimestamp:2020-02-24 14:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-24 14:18:32 +0000 UTC 2020-02-24 14:18:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-24 14:18:32 +0000 UTC 2020-02-24 14:18:23 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 24 14:18:32.393: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/replicasets/test-recreate-deployment-5c8c9cc69d,UID:21f02699-966b-4c17-be27-87f273345bbd,ResourceVersion:25583883,Generation:1,CreationTimestamp:2020-02-24 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c9ca8f8-7ee3-4577-bf73-f935b6309174 0xc001f188d7 0xc001f188d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 14:18:32.393: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 24 14:18:32.393: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6812,SelfLink:/apis/apps/v1/namespaces/deployment-6812/replicasets/test-recreate-deployment-6df85df6b9,UID:3399cf45-78c4-46bd-bd6a-463148c2df09,ResourceVersion:25583873,Generation:2,CreationTimestamp:2020-02-24 14:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c9ca8f8-7ee3-4577-bf73-f935b6309174 0xc001f189b7 0xc001f189b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 14:18:32.406: INFO: Pod "test-recreate-deployment-5c8c9cc69d-nl7hr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-nl7hr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6812,SelfLink:/api/v1/namespaces/deployment-6812/pods/test-recreate-deployment-5c8c9cc69d-nl7hr,UID:5d16b318-f41b-4617-8cd0-6621b76ca260,ResourceVersion:25583885,Generation:0,CreationTimestamp:2020-02-24 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 21f02699-966b-4c17-be27-87f273345bbd 0xc000d2e227 0xc000d2e228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cqs6b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cqs6b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cqs6b true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d2e2a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d2e2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:18:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:18:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:18:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-24 14:18:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:18:32.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6812" for this suite.
Feb 24 14:18:38.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:18:38.628: INFO: namespace deployment-6812 deletion completed in 6.217205378s

• [SLOW TEST:14.902 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:18:38.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 24 14:19:02.916: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:02.916: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:02.998607       8 log.go:172] (0xc001dea840) (0xc0024c6f00) Create stream
I0224 14:19:02.998660       8 log.go:172] (0xc001dea840) (0xc0024c6f00) Stream added, broadcasting: 1
I0224 14:19:03.005523       8 log.go:172] (0xc001dea840) Reply frame received for 1
I0224 14:19:03.005553       8 log.go:172] (0xc001dea840) (0xc002eee460) Create stream
I0224 14:19:03.005559       8 log.go:172] (0xc001dea840) (0xc002eee460) Stream added, broadcasting: 3
I0224 14:19:03.007343       8 log.go:172] (0xc001dea840) Reply frame received for 3
I0224 14:19:03.007368       8 log.go:172] (0xc001dea840) (0xc002eee500) Create stream
I0224 14:19:03.007382       8 log.go:172] (0xc001dea840) (0xc002eee500) Stream added, broadcasting: 5
I0224 14:19:03.017706       8 log.go:172] (0xc001dea840) Reply frame received for 5
I0224 14:19:03.173042       8 log.go:172] (0xc001dea840) Data frame received for 3
I0224 14:19:03.173110       8 log.go:172] (0xc002eee460) (3) Data frame handling
I0224 14:19:03.173137       8 log.go:172] (0xc002eee460) (3) Data frame sent
I0224 14:19:03.330344       8 log.go:172] (0xc001dea840) (0xc002eee460) Stream removed, broadcasting: 3
I0224 14:19:03.330464       8 log.go:172] (0xc001dea840) Data frame received for 1
I0224 14:19:03.330501       8 log.go:172] (0xc0024c6f00) (1) Data frame handling
I0224 14:19:03.330529       8 log.go:172] (0xc0024c6f00) (1) Data frame sent
I0224 14:19:03.330578       8 log.go:172] (0xc001dea840) (0xc002eee500) Stream removed, broadcasting: 5
I0224 14:19:03.330648       8 log.go:172] (0xc001dea840) (0xc0024c6f00) Stream removed, broadcasting: 1
I0224 14:19:03.330873       8 log.go:172] (0xc001dea840) (0xc0024c6f00) Stream removed, broadcasting: 1
I0224 14:19:03.330883       8 log.go:172] (0xc001dea840) (0xc002eee460) Stream removed, broadcasting: 3
I0224 14:19:03.330888       8 log.go:172] (0xc001dea840) (0xc002eee500) Stream removed, broadcasting: 5
Feb 24 14:19:03.330: INFO: Exec stderr: ""
Feb 24 14:19:03.330: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:03.330: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:03.331392       8 log.go:172] (0xc001dea840) Go away received
I0224 14:19:03.392698       8 log.go:172] (0xc001952c60) (0xc00037b900) Create stream
I0224 14:19:03.392746       8 log.go:172] (0xc001952c60) (0xc00037b900) Stream added, broadcasting: 1
I0224 14:19:03.400885       8 log.go:172] (0xc001952c60) Reply frame received for 1
I0224 14:19:03.400916       8 log.go:172] (0xc001952c60) (0xc001bd9b80) Create stream
I0224 14:19:03.400928       8 log.go:172] (0xc001952c60) (0xc001bd9b80) Stream added, broadcasting: 3
I0224 14:19:03.403442       8 log.go:172] (0xc001952c60) Reply frame received for 3
I0224 14:19:03.403472       8 log.go:172] (0xc001952c60) (0xc002eee5a0) Create stream
I0224 14:19:03.403501       8 log.go:172] (0xc001952c60) (0xc002eee5a0) Stream added, broadcasting: 5
I0224 14:19:03.408184       8 log.go:172] (0xc001952c60) Reply frame received for 5
I0224 14:19:03.511495       8 log.go:172] (0xc001952c60) Data frame received for 3
I0224 14:19:03.511524       8 log.go:172] (0xc001bd9b80) (3) Data frame handling
I0224 14:19:03.511535       8 log.go:172] (0xc001bd9b80) (3) Data frame sent
I0224 14:19:03.700481       8 log.go:172] (0xc001952c60) Data frame received for 1
I0224 14:19:03.700623       8 log.go:172] (0xc001952c60) (0xc001bd9b80) Stream removed, broadcasting: 3
I0224 14:19:03.700783       8 log.go:172] (0xc00037b900) (1) Data frame handling
I0224 14:19:03.700833       8 log.go:172] (0xc00037b900) (1) Data frame sent
I0224 14:19:03.700884       8 log.go:172] (0xc001952c60) (0xc002eee5a0) Stream removed, broadcasting: 5
I0224 14:19:03.700918       8 log.go:172] (0xc001952c60) (0xc00037b900) Stream removed, broadcasting: 1
I0224 14:19:03.700937       8 log.go:172] (0xc001952c60) Go away received
I0224 14:19:03.701505       8 log.go:172] (0xc001952c60) (0xc00037b900) Stream removed, broadcasting: 1
I0224 14:19:03.701646       8 log.go:172] (0xc001952c60) (0xc001bd9b80) Stream removed, broadcasting: 3
I0224 14:19:03.701666       8 log.go:172] (0xc001952c60) (0xc002eee5a0) Stream removed, broadcasting: 5
Feb 24 14:19:03.701: INFO: Exec stderr: ""
Feb 24 14:19:03.701: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:03.701: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:03.778456       8 log.go:172] (0xc00303e9a0) (0xc002eee8c0) Create stream
I0224 14:19:03.778503       8 log.go:172] (0xc00303e9a0) (0xc002eee8c0) Stream added, broadcasting: 1
I0224 14:19:03.793417       8 log.go:172] (0xc00303e9a0) Reply frame received for 1
I0224 14:19:03.793506       8 log.go:172] (0xc00303e9a0) (0xc00037b9a0) Create stream
I0224 14:19:03.793516       8 log.go:172] (0xc00303e9a0) (0xc00037b9a0) Stream added, broadcasting: 3
I0224 14:19:03.796183       8 log.go:172] (0xc00303e9a0) Reply frame received for 3
I0224 14:19:03.796212       8 log.go:172] (0xc00303e9a0) (0xc0024c6fa0) Create stream
I0224 14:19:03.796236       8 log.go:172] (0xc00303e9a0) (0xc0024c6fa0) Stream added, broadcasting: 5
I0224 14:19:03.798088       8 log.go:172] (0xc00303e9a0) Reply frame received for 5
I0224 14:19:03.945918       8 log.go:172] (0xc00303e9a0) Data frame received for 3
I0224 14:19:03.946186       8 log.go:172] (0xc00037b9a0) (3) Data frame handling
I0224 14:19:03.946215       8 log.go:172] (0xc00037b9a0) (3) Data frame sent
I0224 14:19:04.221234       8 log.go:172] (0xc00303e9a0) (0xc00037b9a0) Stream removed, broadcasting: 3
I0224 14:19:04.221342       8 log.go:172] (0xc00303e9a0) Data frame received for 1
I0224 14:19:04.221376       8 log.go:172] (0xc002eee8c0) (1) Data frame handling
I0224 14:19:04.221402       8 log.go:172] (0xc002eee8c0) (1) Data frame sent
I0224 14:19:04.221415       8 log.go:172] (0xc00303e9a0) (0xc0024c6fa0) Stream removed, broadcasting: 5
I0224 14:19:04.221441       8 log.go:172] (0xc00303e9a0) (0xc002eee8c0) Stream removed, broadcasting: 1
I0224 14:19:04.221460       8 log.go:172] (0xc00303e9a0) Go away received
I0224 14:19:04.221710       8 log.go:172] (0xc00303e9a0) (0xc002eee8c0) Stream removed, broadcasting: 1
I0224 14:19:04.221729       8 log.go:172] (0xc00303e9a0) (0xc00037b9a0) Stream removed, broadcasting: 3
I0224 14:19:04.221737       8 log.go:172] (0xc00303e9a0) (0xc0024c6fa0) Stream removed, broadcasting: 5
Feb 24 14:19:04.221: INFO: Exec stderr: ""
Feb 24 14:19:04.221: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:04.221: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:04.329378       8 log.go:172] (0xc001deb3f0) (0xc0024c72c0) Create stream
I0224 14:19:04.329412       8 log.go:172] (0xc001deb3f0) (0xc0024c72c0) Stream added, broadcasting: 1
I0224 14:19:04.337463       8 log.go:172] (0xc001deb3f0) Reply frame received for 1
I0224 14:19:04.337512       8 log.go:172] (0xc001deb3f0) (0xc002eee960) Create stream
I0224 14:19:04.337524       8 log.go:172] (0xc001deb3f0) (0xc002eee960) Stream added, broadcasting: 3
I0224 14:19:04.339429       8 log.go:172] (0xc001deb3f0) Reply frame received for 3
I0224 14:19:04.339454       8 log.go:172] (0xc001deb3f0) (0xc001bd9c20) Create stream
I0224 14:19:04.339466       8 log.go:172] (0xc001deb3f0) (0xc001bd9c20) Stream added, broadcasting: 5
I0224 14:19:04.342162       8 log.go:172] (0xc001deb3f0) Reply frame received for 5
I0224 14:19:04.529293       8 log.go:172] (0xc001deb3f0) Data frame received for 3
I0224 14:19:04.529364       8 log.go:172] (0xc002eee960) (3) Data frame handling
I0224 14:19:04.529380       8 log.go:172] (0xc002eee960) (3) Data frame sent
I0224 14:19:04.680824       8 log.go:172] (0xc001deb3f0) (0xc002eee960) Stream removed, broadcasting: 3
I0224 14:19:04.680882       8 log.go:172] (0xc001deb3f0) Data frame received for 1
I0224 14:19:04.680891       8 log.go:172] (0xc0024c72c0) (1) Data frame handling
I0224 14:19:04.680898       8 log.go:172] (0xc0024c72c0) (1) Data frame sent
I0224 14:19:04.680904       8 log.go:172] (0xc001deb3f0) (0xc0024c72c0) Stream removed, broadcasting: 1
I0224 14:19:04.680984       8 log.go:172] (0xc001deb3f0) (0xc001bd9c20) Stream removed, broadcasting: 5
I0224 14:19:04.681004       8 log.go:172] (0xc001deb3f0) (0xc0024c72c0) Stream removed, broadcasting: 1
I0224 14:19:04.681010       8 log.go:172] (0xc001deb3f0) (0xc002eee960) Stream removed, broadcasting: 3
I0224 14:19:04.681015       8 log.go:172] (0xc001deb3f0) (0xc001bd9c20) Stream removed, broadcasting: 5
I0224 14:19:04.681115       8 log.go:172] (0xc001deb3f0) Go away received
Feb 24 14:19:04.681: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 24 14:19:04.681: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:04.681: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:04.720048       8 log.go:172] (0xc0032a40b0) (0xc0024c75e0) Create stream
I0224 14:19:04.720089       8 log.go:172] (0xc0032a40b0) (0xc0024c75e0) Stream added, broadcasting: 1
I0224 14:19:04.727231       8 log.go:172] (0xc0032a40b0) Reply frame received for 1
I0224 14:19:04.727310       8 log.go:172] (0xc0032a40b0) (0xc0019a83c0) Create stream
I0224 14:19:04.727351       8 log.go:172] (0xc0032a40b0) (0xc0019a83c0) Stream added, broadcasting: 3
I0224 14:19:04.728435       8 log.go:172] (0xc0032a40b0) Reply frame received for 3
I0224 14:19:04.728460       8 log.go:172] (0xc0032a40b0) (0xc00037bc20) Create stream
I0224 14:19:04.728465       8 log.go:172] (0xc0032a40b0) (0xc00037bc20) Stream added, broadcasting: 5
I0224 14:19:04.729352       8 log.go:172] (0xc0032a40b0) Reply frame received for 5
I0224 14:19:04.812608       8 log.go:172] (0xc0032a40b0) Data frame received for 3
I0224 14:19:04.812673       8 log.go:172] (0xc0019a83c0) (3) Data frame handling
I0224 14:19:04.812702       8 log.go:172] (0xc0019a83c0) (3) Data frame sent
I0224 14:19:04.906676       8 log.go:172] (0xc0032a40b0) Data frame received for 1
I0224 14:19:04.906741       8 log.go:172] (0xc0032a40b0) (0xc00037bc20) Stream removed, broadcasting: 5
I0224 14:19:04.906771       8 log.go:172] (0xc0024c75e0) (1) Data frame handling
I0224 14:19:04.906783       8 log.go:172] (0xc0024c75e0) (1) Data frame sent
I0224 14:19:04.906800       8 log.go:172] (0xc0032a40b0) (0xc0019a83c0) Stream removed, broadcasting: 3
I0224 14:19:04.906834       8 log.go:172] (0xc0032a40b0) (0xc0024c75e0) Stream removed, broadcasting: 1
I0224 14:19:04.906845       8 log.go:172] (0xc0032a40b0) Go away received
I0224 14:19:04.906943       8 log.go:172] (0xc0032a40b0) (0xc0024c75e0) Stream removed, broadcasting: 1
I0224 14:19:04.906953       8 log.go:172] (0xc0032a40b0) (0xc0019a83c0) Stream removed, broadcasting: 3
I0224 14:19:04.906961       8 log.go:172] (0xc0032a40b0) (0xc00037bc20) Stream removed, broadcasting: 5
Feb 24 14:19:04.906: INFO: Exec stderr: ""
Feb 24 14:19:04.907: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:04.907: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:04.956696       8 log.go:172] (0xc0032a4b00) (0xc0024c79a0) Create stream
I0224 14:19:04.956726       8 log.go:172] (0xc0032a4b00) (0xc0024c79a0) Stream added, broadcasting: 1
I0224 14:19:04.962296       8 log.go:172] (0xc0032a4b00) Reply frame received for 1
I0224 14:19:04.962337       8 log.go:172] (0xc0032a4b00) (0xc001bd9cc0) Create stream
I0224 14:19:04.962344       8 log.go:172] (0xc0032a4b00) (0xc001bd9cc0) Stream added, broadcasting: 3
I0224 14:19:04.963733       8 log.go:172] (0xc0032a4b00) Reply frame received for 3
I0224 14:19:04.963754       8 log.go:172] (0xc0032a4b00) (0xc001bd9d60) Create stream
I0224 14:19:04.963778       8 log.go:172] (0xc0032a4b00) (0xc001bd9d60) Stream added, broadcasting: 5
I0224 14:19:04.964871       8 log.go:172] (0xc0032a4b00) Reply frame received for 5
I0224 14:19:05.072970       8 log.go:172] (0xc0032a4b00) Data frame received for 3
I0224 14:19:05.073065       8 log.go:172] (0xc001bd9cc0) (3) Data frame handling
I0224 14:19:05.073112       8 log.go:172] (0xc001bd9cc0) (3) Data frame sent
I0224 14:19:05.170758       8 log.go:172] (0xc0032a4b00) (0xc001bd9cc0) Stream removed, broadcasting: 3
I0224 14:19:05.170909       8 log.go:172] (0xc0032a4b00) Data frame received for 1
I0224 14:19:05.170953       8 log.go:172] (0xc0032a4b00) (0xc001bd9d60) Stream removed, broadcasting: 5
I0224 14:19:05.171266       8 log.go:172] (0xc0024c79a0) (1) Data frame handling
I0224 14:19:05.171300       8 log.go:172] (0xc0024c79a0) (1) Data frame sent
I0224 14:19:05.171340       8 log.go:172] (0xc0032a4b00) (0xc0024c79a0) Stream removed, broadcasting: 1
I0224 14:19:05.171369       8 log.go:172] (0xc0032a4b00) Go away received
I0224 14:19:05.171566       8 log.go:172] (0xc0032a4b00) (0xc0024c79a0) Stream removed, broadcasting: 1
I0224 14:19:05.171595       8 log.go:172] (0xc0032a4b00) (0xc001bd9cc0) Stream removed, broadcasting: 3
I0224 14:19:05.171605       8 log.go:172] (0xc0032a4b00) (0xc001bd9d60) Stream removed, broadcasting: 5
Feb 24 14:19:05.171: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 24 14:19:05.171: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:05.171: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:05.230226       8 log.go:172] (0xc0032a53f0) (0xc0024c7d60) Create stream
I0224 14:19:05.230313       8 log.go:172] (0xc0032a53f0) (0xc0024c7d60) Stream added, broadcasting: 1
I0224 14:19:05.237232       8 log.go:172] (0xc0032a53f0) Reply frame received for 1
I0224 14:19:05.237261       8 log.go:172] (0xc0032a53f0) (0xc002eeea00) Create stream
I0224 14:19:05.237281       8 log.go:172] (0xc0032a53f0) (0xc002eeea00) Stream added, broadcasting: 3
I0224 14:19:05.238450       8 log.go:172] (0xc0032a53f0) Reply frame received for 3
I0224 14:19:05.238504       8 log.go:172] (0xc0032a53f0) (0xc0019a8460) Create stream
I0224 14:19:05.238522       8 log.go:172] (0xc0032a53f0) (0xc0019a8460) Stream added, broadcasting: 5
I0224 14:19:05.240772       8 log.go:172] (0xc0032a53f0) Reply frame received for 5
I0224 14:19:05.348667       8 log.go:172] (0xc0032a53f0) Data frame received for 3
I0224 14:19:05.348718       8 log.go:172] (0xc002eeea00) (3) Data frame handling
I0224 14:19:05.348732       8 log.go:172] (0xc002eeea00) (3) Data frame sent
I0224 14:19:05.431955       8 log.go:172] (0xc0032a53f0) Data frame received for 1
I0224 14:19:05.432089       8 log.go:172] (0xc0032a53f0) (0xc0019a8460) Stream removed, broadcasting: 5
I0224 14:19:05.432177       8 log.go:172] (0xc0024c7d60) (1) Data frame handling
I0224 14:19:05.432196       8 log.go:172] (0xc0024c7d60) (1) Data frame sent
I0224 14:19:05.432212       8 log.go:172] (0xc0032a53f0) (0xc002eeea00) Stream removed, broadcasting: 3
I0224 14:19:05.432274       8 log.go:172] (0xc0032a53f0) (0xc0024c7d60) Stream removed, broadcasting: 1
I0224 14:19:05.432306       8 log.go:172] (0xc0032a53f0) Go away received
I0224 14:19:05.432511       8 log.go:172] (0xc0032a53f0) (0xc0024c7d60) Stream removed, broadcasting: 1
I0224 14:19:05.432541       8 log.go:172] (0xc0032a53f0) (0xc002eeea00) Stream removed, broadcasting: 3
I0224 14:19:05.432594       8 log.go:172] (0xc0032a53f0) (0xc0019a8460) Stream removed, broadcasting: 5
Feb 24 14:19:05.432: INFO: Exec stderr: ""
Feb 24 14:19:05.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:05.432: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:05.495848       8 log.go:172] (0xc0036520b0) (0xc0019a8780) Create stream
I0224 14:19:05.495884       8 log.go:172] (0xc0036520b0) (0xc0019a8780) Stream added, broadcasting: 1
I0224 14:19:05.512071       8 log.go:172] (0xc0036520b0) Reply frame received for 1
I0224 14:19:05.512123       8 log.go:172] (0xc0036520b0) (0xc001d86000) Create stream
I0224 14:19:05.512136       8 log.go:172] (0xc0036520b0) (0xc001d86000) Stream added, broadcasting: 3
I0224 14:19:05.514700       8 log.go:172] (0xc0036520b0) Reply frame received for 3
I0224 14:19:05.514732       8 log.go:172] (0xc0036520b0) (0xc001d860a0) Create stream
I0224 14:19:05.514742       8 log.go:172] (0xc0036520b0) (0xc001d860a0) Stream added, broadcasting: 5
I0224 14:19:05.517040       8 log.go:172] (0xc0036520b0) Reply frame received for 5
I0224 14:19:05.615588       8 log.go:172] (0xc0036520b0) Data frame received for 3
I0224 14:19:05.615620       8 log.go:172] (0xc001d86000) (3) Data frame handling
I0224 14:19:05.615642       8 log.go:172] (0xc001d86000) (3) Data frame sent
I0224 14:19:05.754362       8 log.go:172] (0xc0036520b0) Data frame received for 1
I0224 14:19:05.754421       8 log.go:172] (0xc0036520b0) (0xc001d86000) Stream removed, broadcasting: 3
I0224 14:19:05.754473       8 log.go:172] (0xc0019a8780) (1) Data frame handling
I0224 14:19:05.754503       8 log.go:172] (0xc0019a8780) (1) Data frame sent
I0224 14:19:05.754542       8 log.go:172] (0xc0036520b0) (0xc001d860a0) Stream removed, broadcasting: 5
I0224 14:19:05.754599       8 log.go:172] (0xc0036520b0) (0xc0019a8780) Stream removed, broadcasting: 1
I0224 14:19:05.754715       8 log.go:172] (0xc0036520b0) Go away received
I0224 14:19:05.754757       8 log.go:172] (0xc0036520b0) (0xc0019a8780) Stream removed, broadcasting: 1
I0224 14:19:05.754771       8 log.go:172] (0xc0036520b0) (0xc001d86000) Stream removed, broadcasting: 3
I0224 14:19:05.754778       8 log.go:172] (0xc0036520b0) (0xc001d860a0) Stream removed, broadcasting: 5
Feb 24 14:19:05.754: INFO: Exec stderr: ""
Feb 24 14:19:05.754: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:05.754: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:05.800042       8 log.go:172] (0xc00099b760) (0xc001bf4820) Create stream
I0224 14:19:05.800065       8 log.go:172] (0xc00099b760) (0xc001bf4820) Stream added, broadcasting: 1
I0224 14:19:05.803218       8 log.go:172] (0xc00099b760) Reply frame received for 1
I0224 14:19:05.803303       8 log.go:172] (0xc00099b760) (0xc001bdc140) Create stream
I0224 14:19:05.803321       8 log.go:172] (0xc00099b760) (0xc001bdc140) Stream added, broadcasting: 3
I0224 14:19:05.804888       8 log.go:172] (0xc00099b760) Reply frame received for 3
I0224 14:19:05.804916       8 log.go:172] (0xc00099b760) (0xc0016fe000) Create stream
I0224 14:19:05.804927       8 log.go:172] (0xc00099b760) (0xc0016fe000) Stream added, broadcasting: 5
I0224 14:19:05.806273       8 log.go:172] (0xc00099b760) Reply frame received for 5
I0224 14:19:05.895006       8 log.go:172] (0xc00099b760) Data frame received for 3
I0224 14:19:05.895050       8 log.go:172] (0xc001bdc140) (3) Data frame handling
I0224 14:19:05.895062       8 log.go:172] (0xc001bdc140) (3) Data frame sent
I0224 14:19:06.012457       8 log.go:172] (0xc00099b760) Data frame received for 1
I0224 14:19:06.012522       8 log.go:172] (0xc001bf4820) (1) Data frame handling
I0224 14:19:06.012562       8 log.go:172] (0xc001bf4820) (1) Data frame sent
I0224 14:19:06.012597       8 log.go:172] (0xc00099b760) (0xc001bf4820) Stream removed, broadcasting: 1
I0224 14:19:06.013711       8 log.go:172] (0xc00099b760) (0xc001bdc140) Stream removed, broadcasting: 3
I0224 14:19:06.013787       8 log.go:172] (0xc00099b760) (0xc0016fe000) Stream removed, broadcasting: 5
I0224 14:19:06.013809       8 log.go:172] (0xc00099b760) Go away received
I0224 14:19:06.014135       8 log.go:172] (0xc00099b760) (0xc001bf4820) Stream removed, broadcasting: 1
I0224 14:19:06.014150       8 log.go:172] (0xc00099b760) (0xc001bdc140) Stream removed, broadcasting: 3
I0224 14:19:06.014155       8 log.go:172] (0xc00099b760) (0xc0016fe000) Stream removed, broadcasting: 5
Feb 24 14:19:06.014: INFO: Exec stderr: ""
Feb 24 14:19:06.014: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4080 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:19:06.014: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:19:06.053378       8 log.go:172] (0xc00052f6b0) (0xc001bdc640) Create stream
I0224 14:19:06.053403       8 log.go:172] (0xc00052f6b0) (0xc001bdc640) Stream added, broadcasting: 1
I0224 14:19:06.056943       8 log.go:172] (0xc00052f6b0) Reply frame received for 1
I0224 14:19:06.056967       8 log.go:172] (0xc00052f6b0) (0xc001d86140) Create stream
I0224 14:19:06.056976       8 log.go:172] (0xc00052f6b0) (0xc001d86140) Stream added, broadcasting: 3
I0224 14:19:06.059445       8 log.go:172] (0xc00052f6b0) Reply frame received for 3
I0224 14:19:06.059464       8 log.go:172] (0xc00052f6b0) (0xc0016fe0a0) Create stream
I0224 14:19:06.059472       8 log.go:172] (0xc00052f6b0) (0xc0016fe0a0) Stream added, broadcasting: 5
I0224 14:19:06.061295       8 log.go:172] (0xc00052f6b0) Reply frame received for 5
I0224 14:19:06.142850       8 log.go:172] (0xc00052f6b0) Data frame received for 3
I0224 14:19:06.142904       8 log.go:172] (0xc001d86140) (3) Data frame handling
I0224 14:19:06.142920       8 log.go:172] (0xc001d86140) (3) Data frame sent
I0224 14:19:06.245907       8 log.go:172] (0xc00052f6b0) Data frame received for 1
I0224 14:19:06.245934       8 log.go:172] (0xc001bdc640) (1) Data frame handling
I0224 14:19:06.245942       8 log.go:172] (0xc001bdc640) (1) Data frame sent
I0224 14:19:06.245962       8 log.go:172] (0xc00052f6b0) (0xc001d86140) Stream removed, broadcasting: 3
I0224 14:19:06.245981       8 log.go:172] (0xc00052f6b0) (0xc001bdc640) Stream removed, broadcasting: 1
I0224 14:19:06.246753       8 log.go:172] (0xc00052f6b0) (0xc0016fe0a0) Stream removed, broadcasting: 5
I0224 14:19:06.246772       8 log.go:172] (0xc00052f6b0) Go away received
I0224 14:19:06.246803       8 log.go:172] (0xc00052f6b0) (0xc001bdc640) Stream removed, broadcasting: 1
I0224 14:19:06.246821       8 log.go:172] (0xc00052f6b0) (0xc001d86140) Stream removed, broadcasting: 3
I0224 14:19:06.246849       8 log.go:172] (0xc00052f6b0) (0xc0016fe0a0) Stream removed, broadcasting: 5
Feb 24 14:19:06.246: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:19:06.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4080" for this suite.
Feb 24 14:19:58.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:19:58.474: INFO: namespace e2e-kubelet-etc-hosts-4080 deletion completed in 52.218278894s

• [SLOW TEST:79.846 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:19:58.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:19:58.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6991" for this suite.
Feb 24 14:20:04.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:20:04.692: INFO: namespace services-6991 deletion completed in 6.084917007s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.217 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
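[Editor's note] The "secure master service" spec above only inspects the built-in `kubernetes` Service. A hedged sketch of what it expects to find (abridged; the `targetPort` is cluster-specific, 6443 being the common kubeadm value, and is an assumption here):

```yaml
# The built-in Service this test inspects: it must expose the API
# server over HTTPS on port 443 in the default namespace.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443   # assumption: typical kubeadm API server port
```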
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:20:04.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 14:20:14.472: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:20:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4284" for this suite.
Feb 24 14:20:20.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:20:20.849: INFO: namespace container-runtime-4284 deletion completed in 6.294709831s

• [SLOW TEST:16.158 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
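[Editor's note] The termination-message behavior exercised above can be reproduced with a pod spec along these lines. This is a minimal sketch, not the exact fixture the e2e framework generates: the pod name, image, UID, and custom path are illustrative; only the `DONE` message mirrors the log's assertion.

```yaml
# Hypothetical pod: custom terminationMessagePath written by a
# non-root user, as the [NodeConformance] spec above exercises.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                  # illustrative image
    securityContext:
      runAsUser: 1000               # non-root, per the test title
    # Non-default path; the default is /dev/termination-log.
    terminationMessagePath: /dev/termination-custom
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
```

After the pod succeeds, the message surfaces in `status.containerStatuses[0].state.terminated.message`, which is what the `Expected: &{DONE} to match Container's Termination Message: DONE` line above asserts against.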
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:20:20.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:20:20.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26" in namespace "projected-3973" to be "success or failure"
Feb 24 14:20:21.006: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26": Phase="Pending", Reason="", readiness=false. Elapsed: 19.762586ms
Feb 24 14:20:23.012: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026276926s
Feb 24 14:20:25.023: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036776206s
Feb 24 14:20:27.039: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052592609s
Feb 24 14:20:29.044: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057889177s
STEP: Saw pod success
Feb 24 14:20:29.044: INFO: Pod "downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26" satisfied condition "success or failure"
Feb 24 14:20:29.047: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26 container client-container: 
STEP: delete the pod
Feb 24 14:20:29.164: INFO: Waiting for pod downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26 to disappear
Feb 24 14:20:29.170: INFO: Pod downwardapi-volume-16a93090-fed8-4b9a-8f2a-4664657a1f26 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:20:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3973" for this suite.
Feb 24 14:20:35.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:20:35.313: INFO: namespace projected-3973 deletion completed in 6.138250906s

• [SLOW TEST:14.462 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
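[Editor's note] The DefaultMode test above creates a pod with a projected downward API volume and checks the file mode via the container's logs (the "Trying to get logs" line). A hedged sketch, assuming illustrative names and a 0400 mode rather than the framework's exact fixture:

```yaml
# Hypothetical pod: projected downward API volume with an explicit
# defaultMode, the knob this spec verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # illustrative image
    # Print the file's octal mode so it can be asserted from logs.
    command: ["/bin/sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                 # assumption: -r-------- on files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```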
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:20:35.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 24 14:20:35.402: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 14:20:35.410: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 14:20:35.412: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 24 14:20:35.420: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.420: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 14:20:35.420: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 24 14:20:35.420: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:20:35.420: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:20:35.420: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.420: INFO: 	Container kube-bench ready: false, restart count 0
Feb 24 14:20:35.420: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 24 14:20:35.432: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 24 14:20:35.432: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 24 14:20:35.432: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container coredns ready: true, restart count 0
Feb 24 14:20:35.432: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container etcd ready: true, restart count 0
Feb 24 14:20:35.432: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:20:35.432: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:20:35.432: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container coredns ready: true, restart count 0
Feb 24 14:20:35.432: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 24 14:20:35.432: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 24 14:20:35.432: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 24 14:20:35.604: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 24 14:20:35.604: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0.15f65c82bc190199], Reason = [Scheduled], Message = [Successfully assigned sched-pred-636/filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0.15f65c840a92afe7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0.15f65c84fffe970c], Reason = [Created], Message = [Created container filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0.15f65c8518f9fb9c], Reason = [Started], Message = [Started container filler-pod-0d6b3f15-45c0-4c97-bea3-b95bb3a01ce0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-24974042-42f6-4114-a83d-4c811da92fa9.15f65c82bbabf910], Reason = [Scheduled], Message = [Successfully assigned sched-pred-636/filler-pod-24974042-42f6-4114-a83d-4c811da92fa9 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-24974042-42f6-4114-a83d-4c811da92fa9.15f65c84080d3b35], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-24974042-42f6-4114-a83d-4c811da92fa9.15f65c84e1096131], Reason = [Created], Message = [Created container filler-pod-24974042-42f6-4114-a83d-4c811da92fa9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-24974042-42f6-4114-a83d-4c811da92fa9.15f65c850abce598], Reason = [Started], Message = [Started container filler-pod-24974042-42f6-4114-a83d-4c811da92fa9]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f65c85897f9c03], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:20:48.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-636" for this suite.
Feb 24 14:20:54.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:20:54.965: INFO: namespace sched-pred-636 deletion completed in 6.179265509s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.652 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
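[Editor's note] The scheduler-predicates spec above works in three steps visible in the log: it sums each node's existing CPU requests (the `requesting resource cpu=...` lines), starts filler pods sized to consume the remaining allocatable CPU, then creates one more pod whose request cannot fit anywhere, expecting the `FailedScheduling ... 0/2 nodes are available: 2 Insufficient cpu` event. A hedged sketch of that final pod (the request amount is an assumption; any value above the remaining free CPU triggers the event):

```yaml
# Hypothetical "additional" pod that cannot be scheduled once the
# filler pods have consumed the cluster's spare CPU.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod        # matches the event name in the log
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "500m"           # assumption: more CPU than remains free
```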
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:20:54.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:21:04.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9336" for this suite.
Feb 24 14:21:10.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:21:10.687: INFO: namespace watch-9336 deletion completed in 6.224621103s

• [SLOW TEST:15.721 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:21:10.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 24 14:21:10.822: INFO: Waiting up to 5m0s for pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7" in namespace "emptydir-3697" to be "success or failure"
Feb 24 14:21:10.832: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022097ms
Feb 24 14:21:12.843: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02058544s
Feb 24 14:21:14.851: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028447226s
Feb 24 14:21:16.860: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037988631s
Feb 24 14:21:18.870: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047179453s
STEP: Saw pod success
Feb 24 14:21:18.870: INFO: Pod "pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7" satisfied condition "success or failure"
Feb 24 14:21:18.874: INFO: Trying to get logs from node iruya-node pod pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7 container test-container: 
STEP: delete the pod
Feb 24 14:21:19.240: INFO: Waiting for pod pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7 to disappear
Feb 24 14:21:19.263: INFO: Pod pod-d1eb4866-9f4d-4700-bda2-b5b0788f57b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:21:19.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3697" for this suite.
Feb 24 14:21:25.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:21:25.455: INFO: namespace emptydir-3697 deletion completed in 6.179020872s

• [SLOW TEST:14.768 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
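[Editor's note] The (root,0666,tmpfs) case above mounts a memory-backed emptyDir and verifies a file created at mode 0666 on it. A minimal sketch under illustrative names; the real test image performs the mode check itself:

```yaml
# Hypothetical pod mirroring the (root,0666,tmpfs) case: a
# memory-backed (tmpfs) emptyDir inspected from inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative image
    # Show the mount type and file modes for log-based verification.
    command: ["/bin/sh", "-c", "mount | grep /test-volume; ls -la /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs backing, as in the test name
```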
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:21:25.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:21:25.588: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 24 14:21:30.603: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 24 14:21:34.615: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 24 14:21:34.880: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7914,SelfLink:/apis/apps/v1/namespaces/deployment-7914/deployments/test-cleanup-deployment,UID:33b77f66-43dd-42d8-b34d-d4702f80e6f5,ResourceVersion:25584461,Generation:1,CreationTimestamp:2020-02-24 14:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 24 14:21:34.943: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb 24 14:21:34.944: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 24 14:21:34.944: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7914,SelfLink:/apis/apps/v1/namespaces/deployment-7914/replicasets/test-cleanup-controller,UID:f69f97e1-2295-47a4-8d24-afeab7e580e8,ResourceVersion:25584462,Generation:1,CreationTimestamp:2020-02-24 14:21:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 33b77f66-43dd-42d8-b34d-d4702f80e6f5 0xc00257c4e7 0xc00257c4e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 24 14:21:35.063: INFO: Pod "test-cleanup-controller-4lcms" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-4lcms,GenerateName:test-cleanup-controller-,Namespace:deployment-7914,SelfLink:/api/v1/namespaces/deployment-7914/pods/test-cleanup-controller-4lcms,UID:3d6287ec-6ae6-477c-8e1c-e267b986994d,ResourceVersion:25584456,Generation:0,CreationTimestamp:2020-02-24 14:21:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f69f97e1-2295-47a4-8d24-afeab7e580e8 0xc002f3cab7 0xc002f3cab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gvn2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gvn2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gvn2w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002f3cb50} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002f3cb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:21:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:21:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:21:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:21:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-24 14:21:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-24 14:21:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ab11333b7abc409c0492c1eba6ba60c1c7b5322ea697eb9aeec2d1b4751a1823}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:21:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7914" for this suite.
Feb 24 14:21:44.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:21:44.430: INFO: namespace deployment-7914 deletion completed in 9.32274232s

• [SLOW TEST:18.974 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:21:44.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-74bc278d-e873-457b-b718-eae4e3a41f37
STEP: Creating secret with name s-test-opt-upd-c7e67a13-cd36-4ffc-8382-136ba111d7d0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-74bc278d-e873-457b-b718-eae4e3a41f37
STEP: Updating secret s-test-opt-upd-c7e67a13-cd36-4ffc-8382-136ba111d7d0
STEP: Creating secret with name s-test-opt-create-a3f3b99f-c46f-4e37-a96e-c57deae98979
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:22:00.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9521" for this suite.
Feb 24 14:22:22.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:22:23.047: INFO: namespace secrets-9521 deletion completed in 22.105191317s

• [SLOW TEST:38.616 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:22:23.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 24 14:22:23.184: INFO: Waiting up to 5m0s for pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520" in namespace "containers-241" to be "success or failure"
Feb 24 14:22:23.283: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520": Phase="Pending", Reason="", readiness=false. Elapsed: 99.77281ms
Feb 24 14:22:25.289: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105496301s
Feb 24 14:22:27.298: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114686445s
Feb 24 14:22:29.334: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150108915s
Feb 24 14:22:31.342: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158525671s
STEP: Saw pod success
Feb 24 14:22:31.342: INFO: Pod "client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520" satisfied condition "success or failure"
Feb 24 14:22:31.347: INFO: Trying to get logs from node iruya-node pod client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520 container test-container: 
STEP: delete the pod
Feb 24 14:22:31.775: INFO: Waiting for pod client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520 to disappear
Feb 24 14:22:31.793: INFO: Pod client-containers-c78b7961-9024-4a15-8af0-9d41c5b6e520 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:22:31.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-241" for this suite.
Feb 24 14:22:37.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:22:38.055: INFO: namespace containers-241 deletion completed in 6.243206927s

• [SLOW TEST:15.007 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
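The "Waiting up to 5m0s for pod … to be "success or failure"" lines above show the framework polling the pod phase roughly every two seconds until it reaches a terminal state. A minimal sketch of that wait loop, with a hypothetical `get_phase` callable standing in for the API lookup (the clock/sleep parameters are just there to make the sketch testable, not part of the real framework):

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or the timeout expires.

    get_phase is a caller-supplied callable (hypothetical here) returning the
    current phase string, e.g. "Pending", "Running", "Succeeded", "Failed".
    Returns (phase, elapsed_seconds) on success; raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

This mirrors the Pending/Pending/…/Succeeded progression logged by the test, not the actual Go implementation in the e2e framework.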
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:22:38.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 24 14:22:38.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 24 14:22:40.515: INFO: stderr: ""
Feb 24 14:22:40.515: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:22:40.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9999" for this suite.
Feb 24 14:22:46.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:22:46.787: INFO: namespace kubectl-9999 deletion completed in 6.264243233s

• [SLOW TEST:8.732 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
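The cluster-info stdout captured above is ANSI-colored (`\x1b[0;32m…\x1b[0m`), which is why it looks noisy in the raw log. A small sketch for stripping those SGR color escapes when post-processing such output:

```python
import re

# SGR color sequences of the form ESC [ ... m, as seen in kubectl cluster-info output
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color escapes so the captured stdout reads as plain text."""
    return ANSI_ESCAPE.sub("", text)
```

Applied to the logged stdout, this yields the plain "Kubernetes master is running at https://172.24.4.57:6443" text the test asserts on.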
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:22:46.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:22:46.915: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 24 14:22:47.006: INFO: Number of nodes with available pods: 0
Feb 24 14:22:47.006: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 24 14:22:47.204: INFO: Number of nodes with available pods: 0
Feb 24 14:22:47.204: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:48.213: INFO: Number of nodes with available pods: 0
Feb 24 14:22:48.214: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:49.211: INFO: Number of nodes with available pods: 0
Feb 24 14:22:49.211: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:50.214: INFO: Number of nodes with available pods: 0
Feb 24 14:22:50.214: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:51.246: INFO: Number of nodes with available pods: 0
Feb 24 14:22:51.246: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:52.210: INFO: Number of nodes with available pods: 0
Feb 24 14:22:52.210: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:53.265: INFO: Number of nodes with available pods: 0
Feb 24 14:22:53.265: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:54.214: INFO: Number of nodes with available pods: 0
Feb 24 14:22:54.214: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:55.210: INFO: Number of nodes with available pods: 1
Feb 24 14:22:55.210: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 24 14:22:55.288: INFO: Number of nodes with available pods: 1
Feb 24 14:22:55.288: INFO: Number of running nodes: 0, number of available pods: 1
Feb 24 14:22:56.297: INFO: Number of nodes with available pods: 0
Feb 24 14:22:56.297: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 24 14:22:56.378: INFO: Number of nodes with available pods: 0
Feb 24 14:22:56.378: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:57.386: INFO: Number of nodes with available pods: 0
Feb 24 14:22:57.386: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:58.390: INFO: Number of nodes with available pods: 0
Feb 24 14:22:58.390: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:22:59.386: INFO: Number of nodes with available pods: 0
Feb 24 14:22:59.386: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:00.388: INFO: Number of nodes with available pods: 0
Feb 24 14:23:00.388: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:01.386: INFO: Number of nodes with available pods: 0
Feb 24 14:23:01.386: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:02.390: INFO: Number of nodes with available pods: 0
Feb 24 14:23:02.390: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:03.387: INFO: Number of nodes with available pods: 0
Feb 24 14:23:03.387: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:04.449: INFO: Number of nodes with available pods: 0
Feb 24 14:23:04.449: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:05.387: INFO: Number of nodes with available pods: 0
Feb 24 14:23:05.387: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:06.386: INFO: Number of nodes with available pods: 0
Feb 24 14:23:06.386: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:07.392: INFO: Number of nodes with available pods: 0
Feb 24 14:23:07.392: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:08.391: INFO: Number of nodes with available pods: 0
Feb 24 14:23:08.391: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:09.394: INFO: Number of nodes with available pods: 0
Feb 24 14:23:09.394: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:10.384: INFO: Number of nodes with available pods: 0
Feb 24 14:23:10.384: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:11.386: INFO: Number of nodes with available pods: 0
Feb 24 14:23:11.386: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:12.385: INFO: Number of nodes with available pods: 0
Feb 24 14:23:12.385: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:13.385: INFO: Number of nodes with available pods: 0
Feb 24 14:23:13.385: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:14.384: INFO: Number of nodes with available pods: 0
Feb 24 14:23:14.384: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:15.387: INFO: Number of nodes with available pods: 0
Feb 24 14:23:15.387: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:23:16.392: INFO: Number of nodes with available pods: 1
Feb 24 14:23:16.392: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2834, will wait for the garbage collector to delete the pods
Feb 24 14:23:16.491: INFO: Deleting DaemonSet.extensions daemon-set took: 35.533188ms
Feb 24 14:23:16.792: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.372115ms
Feb 24 14:23:26.605: INFO: Number of nodes with available pods: 0
Feb 24 14:23:26.605: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 14:23:26.613: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2834/daemonsets","resourceVersion":"25584772"},"items":null}

Feb 24 14:23:26.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2834/pods","resourceVersion":"25584772"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:23:26.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2834" for this suite.
Feb 24 14:23:32.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:23:32.903: INFO: namespace daemonsets-2834 deletion completed in 6.191076094s

• [SLOW TEST:46.115 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
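The DaemonSet test above relabels a node "blue", then "green", and waits for the controller to schedule or evict the daemon pod accordingly. The selector check itself reduces to exact label matching; a sketch (the `color` label key is hypothetical, chosen to mirror the blue/green steps in the log):

```python
def nodes_that_should_run_daemon(node_labels, node_selector):
    """Return the nodes whose labels satisfy every key/value in node_selector.

    node_labels: {node_name: {label_key: label_value}}
    node_selector: the nodeSelector map from the DaemonSet pod template.
    An empty selector matches every node (the default DaemonSet behavior).
    """
    return sorted(
        name
        for name, labels in node_labels.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    )
```

With one node labeled blue, a blue selector matches exactly that node, and flipping the selector (or the label) to green empties the match set, which is the unschedule step the log waits on.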
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:23:32.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 24 14:23:33.027: INFO: Waiting up to 5m0s for pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092" in namespace "emptydir-368" to be "success or failure"
Feb 24 14:23:33.052: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Pending", Reason="", readiness=false. Elapsed: 24.283753ms
Feb 24 14:23:35.069: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041462585s
Feb 24 14:23:37.088: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061109896s
Feb 24 14:23:39.100: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072950052s
Feb 24 14:23:41.106: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078819913s
Feb 24 14:23:43.112: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08451868s
STEP: Saw pod success
Feb 24 14:23:43.112: INFO: Pod "pod-896649f0-b20d-4054-8b9b-b4ee28572092" satisfied condition "success or failure"
Feb 24 14:23:43.115: INFO: Trying to get logs from node iruya-node pod pod-896649f0-b20d-4054-8b9b-b4ee28572092 container test-container: 
STEP: delete the pod
Feb 24 14:23:43.177: INFO: Waiting for pod pod-896649f0-b20d-4054-8b9b-b4ee28572092 to disappear
Feb 24 14:23:43.183: INFO: Pod pod-896649f0-b20d-4054-8b9b-b4ee28572092 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:23:43.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-368" for this suite.
Feb 24 14:23:49.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:23:49.333: INFO: namespace emptydir-368 deletion completed in 6.145839149s

• [SLOW TEST:16.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:23:49.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5673
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 14:23:49.391: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 24 14:24:25.612: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5673 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:24:25.612: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:24:25.685846       8 log.go:172] (0xc0013f66e0) (0xc00095d900) Create stream
I0224 14:24:25.685890       8 log.go:172] (0xc0013f66e0) (0xc00095d900) Stream added, broadcasting: 1
I0224 14:24:25.696022       8 log.go:172] (0xc0013f66e0) Reply frame received for 1
I0224 14:24:25.696073       8 log.go:172] (0xc0013f66e0) (0xc00083f2c0) Create stream
I0224 14:24:25.696085       8 log.go:172] (0xc0013f66e0) (0xc00083f2c0) Stream added, broadcasting: 3
I0224 14:24:25.698904       8 log.go:172] (0xc0013f66e0) Reply frame received for 3
I0224 14:24:25.698936       8 log.go:172] (0xc0013f66e0) (0xc00095dc20) Create stream
I0224 14:24:25.698947       8 log.go:172] (0xc0013f66e0) (0xc00095dc20) Stream added, broadcasting: 5
I0224 14:24:25.701765       8 log.go:172] (0xc0013f66e0) Reply frame received for 5
I0224 14:24:26.877975       8 log.go:172] (0xc0013f66e0) Data frame received for 3
I0224 14:24:26.878103       8 log.go:172] (0xc00083f2c0) (3) Data frame handling
I0224 14:24:26.878128       8 log.go:172] (0xc00083f2c0) (3) Data frame sent
I0224 14:24:27.025640       8 log.go:172] (0xc0013f66e0) (0xc00083f2c0) Stream removed, broadcasting: 3
I0224 14:24:27.026025       8 log.go:172] (0xc0013f66e0) Data frame received for 1
I0224 14:24:27.026273       8 log.go:172] (0xc0013f66e0) (0xc00095dc20) Stream removed, broadcasting: 5
I0224 14:24:27.026496       8 log.go:172] (0xc00095d900) (1) Data frame handling
I0224 14:24:27.026543       8 log.go:172] (0xc00095d900) (1) Data frame sent
I0224 14:24:27.026633       8 log.go:172] (0xc0013f66e0) (0xc00095d900) Stream removed, broadcasting: 1
I0224 14:24:27.026666       8 log.go:172] (0xc0013f66e0) Go away received
I0224 14:24:27.027428       8 log.go:172] (0xc0013f66e0) (0xc00095d900) Stream removed, broadcasting: 1
I0224 14:24:27.027479       8 log.go:172] (0xc0013f66e0) (0xc00083f2c0) Stream removed, broadcasting: 3
I0224 14:24:27.027493       8 log.go:172] (0xc0013f66e0) (0xc00095dc20) Stream removed, broadcasting: 5
Feb 24 14:24:27.027: INFO: Found all expected endpoints: [netserver-0]
Feb 24 14:24:27.035: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5673 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 14:24:27.035: INFO: >>> kubeConfig: /root/.kube/config
I0224 14:24:27.154737       8 log.go:172] (0xc0013f7290) (0xc001504000) Create stream
I0224 14:24:27.154862       8 log.go:172] (0xc0013f7290) (0xc001504000) Stream added, broadcasting: 1
I0224 14:24:27.168832       8 log.go:172] (0xc0013f7290) Reply frame received for 1
I0224 14:24:27.168988       8 log.go:172] (0xc0013f7290) (0xc001bdcfa0) Create stream
I0224 14:24:27.169001       8 log.go:172] (0xc0013f7290) (0xc001bdcfa0) Stream added, broadcasting: 3
I0224 14:24:27.175050       8 log.go:172] (0xc0013f7290) Reply frame received for 3
I0224 14:24:27.175124       8 log.go:172] (0xc0013f7290) (0xc001bdd040) Create stream
I0224 14:24:27.175137       8 log.go:172] (0xc0013f7290) (0xc001bdd040) Stream added, broadcasting: 5
I0224 14:24:27.176844       8 log.go:172] (0xc0013f7290) Reply frame received for 5
I0224 14:24:28.329746       8 log.go:172] (0xc0013f7290) Data frame received for 3
I0224 14:24:28.329849       8 log.go:172] (0xc001bdcfa0) (3) Data frame handling
I0224 14:24:28.329903       8 log.go:172] (0xc001bdcfa0) (3) Data frame sent
I0224 14:24:28.549191       8 log.go:172] (0xc0013f7290) Data frame received for 1
I0224 14:24:28.549264       8 log.go:172] (0xc001504000) (1) Data frame handling
I0224 14:24:28.549286       8 log.go:172] (0xc001504000) (1) Data frame sent
I0224 14:24:28.549375       8 log.go:172] (0xc0013f7290) (0xc001504000) Stream removed, broadcasting: 1
I0224 14:24:28.549574       8 log.go:172] (0xc0013f7290) (0xc001bdd040) Stream removed, broadcasting: 5
I0224 14:24:28.549611       8 log.go:172] (0xc0013f7290) (0xc001bdcfa0) Stream removed, broadcasting: 3
I0224 14:24:28.549660       8 log.go:172] (0xc0013f7290) (0xc001504000) Stream removed, broadcasting: 1
I0224 14:24:28.549676       8 log.go:172] (0xc0013f7290) (0xc001bdcfa0) Stream removed, broadcasting: 3
I0224 14:24:28.549684       8 log.go:172] (0xc0013f7290) (0xc001bdd040) Stream removed, broadcasting: 5
I0224 14:24:28.550593       8 log.go:172] (0xc0013f7290) Go away received
Feb 24 14:24:28.550: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:24:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5673" for this suite.
Feb 24 14:24:52.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:24:52.962: INFO: namespace pod-network-test-5673 deletion completed in 24.206918166s

• [SLOW TEST:63.629 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
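The ExecWithOptions commands above (`echo hostName | nc -w 1 -u 10.44.0.1 8081 | …`) send a UDP datagram to each netserver pod and expect its hostname back. A self-contained loopback sketch of that probe pattern (the echo server here is a stand-in for the test's netserver container, not the real image):

```python
import socket
import threading

def run_udp_reply_server(reply=b"netserver-0"):
    """Stand-in for the e2e netserver: answer one UDP datagram with `reply`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral port

    def serve():
        data, addr = srv.recvfrom(1024)
        srv.sendto(reply, addr)

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

def udp_probe(addr, payload=b"hostName", timeout=1.0):
    """Rough equivalent of `echo hostName | nc -w 1 -u <ip> <port>`:
    send one datagram, wait up to `timeout` seconds for one reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, addr)
        data, _ = s.recvfrom(1024)
        return data.decode()
```

The test passes when the reply matches the expected endpoint name, which is the "Found all expected endpoints: [netserver-0]" line in the log.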
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:24:52.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 24 14:24:53.079: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 14:24:53.086: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 14:24:53.088: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 24 14:24:53.102: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 24 14:24:53.102: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:24:53.102: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:24:53.102: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.102: INFO: 	Container kube-bench ready: false, restart count 0
Feb 24 14:24:53.102: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.102: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 14:24:53.102: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 24 14:24:53.137: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container etcd ready: true, restart count 0
Feb 24 14:24:53.137: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container weave ready: true, restart count 0
Feb 24 14:24:53.137: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 14:24:53.137: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container coredns ready: true, restart count 0
Feb 24 14:24:53.137: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 24 14:24:53.137: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 14:24:53.137: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 24 14:24:53.137: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 24 14:24:53.137: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 24 14:24:53.137: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f65cbeb31233a5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:24:54.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2579" for this suite.
Feb 24 14:25:00.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:25:00.345: INFO: namespace sched-pred-2579 deletion completed in 6.150252194s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.383 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
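The FailedScheduling event above carries the scheduler's per-node failure summary: "0/2 nodes are available: 2 node(s) didn't match node selector." A sketch of how such a summary can be assembled from per-node predicate results (illustrative only, not the scheduler's actual code):

```python
from collections import Counter

def failed_scheduling_message(node_reasons):
    """Build a scheduler-style summary from per-node results.

    node_reasons maps node name -> failure reason string,
    or None if the node could accept the pod.
    """
    failures = Counter(r for r in node_reasons.values() if r)
    total = len(node_reasons)
    available = total - sum(failures.values())
    parts = ", ".join(f"{count} node(s) {reason}"
                      for reason, count in sorted(failures.items()))
    return f"{available}/{total} nodes are available: {parts}."
```

Feeding it the two nodes from this cluster, both failing the selector predicate, reproduces the message format seen in the event.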
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:25:00.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-8b4be877-cee9-4d6c-a32a-4aa6155a4569 in namespace container-probe-6294
Feb 24 14:25:10.470: INFO: Started pod liveness-8b4be877-cee9-4d6c-a32a-4aa6155a4569 in namespace container-probe-6294
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 14:25:10.474: INFO: Initial restart count of pod liveness-8b4be877-cee9-4d6c-a32a-4aa6155a4569 is 0
Feb 24 14:25:52.769: INFO: Restart count of pod container-probe-6294/liveness-8b4be877-cee9-4d6c-a32a-4aa6155a4569 is now 1 (42.294598432s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:25:52.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6294" for this suite.
Feb 24 14:26:33.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:26:33.129: INFO: namespace container-probe-6294 deletion completed in 40.293594553s

• [SLOW TEST:92.783 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
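The restart observed at ~42s is driven by an HTTP liveness probe against `/healthz`: once the endpoint starts failing, the kubelet kills and restarts the container, bumping restartCount from 0 to 1. A sketch of the kind of probe configuration this test exercises (image, port, and thresholds are illustrative assumptions, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # illustrative; serves /healthz, then starts returning errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15         # give the server time to start
      periodSeconds: 3                # probe every 3s
      failureThreshold: 1             # a single failure triggers a restart
```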
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:26:33.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 14:26:51.266: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:26:51.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6435" for this suite.
Feb 24 14:26:57.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:26:57.555: INFO: namespace container-runtime-6435 deletion completed in 6.213103441s

• [SLOW TEST:24.426 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
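This spec checks that with `TerminationMessagePolicy: FallbackToLogsOnError`, a container that exits successfully reports an *empty* termination message — logs are only promoted to the message on error, which is why the log shows `Expected: &{} to match Container's Termination Message:  --`. A hedged sketch of the relevant container fields (image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                                  # illustrative image
    command: ["true"]                               # exits 0, so logs are NOT used as the message
    terminationMessagePolicy: FallbackToLogsOnError
    terminationMessagePath: /dev/termination-log    # the default path; empty file => empty message
```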
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:26:57.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 24 14:26:57.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4480,SelfLink:/api/v1/namespaces/watch-4480/configmaps/e2e-watch-test-watch-closed,UID:5a34280e-7a14-43d6-b0e4-6d0173ced179,ResourceVersion:25585208,Generation:0,CreationTimestamp:2020-02-24 14:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 24 14:26:57.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4480,SelfLink:/api/v1/namespaces/watch-4480/configmaps/e2e-watch-test-watch-closed,UID:5a34280e-7a14-43d6-b0e4-6d0173ced179,ResourceVersion:25585209,Generation:0,CreationTimestamp:2020-02-24 14:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 24 14:26:57.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4480,SelfLink:/api/v1/namespaces/watch-4480/configmaps/e2e-watch-test-watch-closed,UID:5a34280e-7a14-43d6-b0e4-6d0173ced179,ResourceVersion:25585210,Generation:0,CreationTimestamp:2020-02-24 14:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 24 14:26:57.949: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4480,SelfLink:/api/v1/namespaces/watch-4480/configmaps/e2e-watch-test-watch-closed,UID:5a34280e-7a14-43d6-b0e4-6d0173ced179,ResourceVersion:25585211,Generation:0,CreationTimestamp:2020-02-24 14:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:26:57.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4480" for this suite.
Feb 24 14:27:04.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:27:04.099: INFO: namespace watch-4480 deletion completed in 6.136141805s

• [SLOW TEST:6.544 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
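Restarting a watch from the last observed resource version maps to a plain watch request against the API server. Using the ResourceVersion values from the log above, the second watch resumes where the first one closed and receives exactly the MODIFIED and DELETED events it missed:

```
# first watch closed after observing ResourceVersion 25585209;
# the new watch resumes from that point:
GET /api/v1/namespaces/watch-4480/configmaps?watch=1&resourceVersion=25585209

# events then delivered, matching the log:
#   MODIFIED  (ResourceVersion 25585210, mutation: 2)
#   DELETED   (ResourceVersion 25585211)
```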
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:27:04.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 24 14:27:04.248: INFO: Waiting up to 5m0s for pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d" in namespace "emptydir-2182" to be "success or failure"
Feb 24 14:27:04.261: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.580728ms
Feb 24 14:27:06.274: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026446694s
Feb 24 14:27:08.280: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032024175s
Feb 24 14:27:10.289: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04080998s
Feb 24 14:27:12.295: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046757646s
Feb 24 14:27:14.304: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055775486s
Feb 24 14:27:16.339: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.090837189s
Feb 24 14:27:18.347: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.098828321s
Feb 24 14:27:20.705: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.456942588s
Feb 24 14:27:22.796: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.547699073s
STEP: Saw pod success
Feb 24 14:27:22.796: INFO: Pod "pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d" satisfied condition "success or failure"
Feb 24 14:27:22.802: INFO: Trying to get logs from node iruya-node pod pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d container test-container: 
STEP: delete the pod
Feb 24 14:27:22.965: INFO: Waiting for pod pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d to disappear
Feb 24 14:27:22.987: INFO: Pod pod-ee663ced-f7ee-45d0-90cb-2f09db879b7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:27:22.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2182" for this suite.
Feb 24 14:27:29.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:27:29.222: INFO: namespace emptydir-2182 deletion completed in 6.228897101s

• [SLOW TEST:25.122 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
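The (root,0777,tmpfs) case mounts a memory-backed emptyDir and runs a test container that creates a file with mode 0777, then exits so the framework can check the "success or failure" condition. A sketch of such a pod — image and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```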
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:27:29.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-cbb39c5f-6d38-4321-a088-050206621dae
STEP: Creating a pod to test consume configMaps
Feb 24 14:27:29.427: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9" in namespace "projected-7864" to be "success or failure"
Feb 24 14:27:29.531: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 104.039538ms
Feb 24 14:27:31.538: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111921531s
Feb 24 14:27:33.546: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119827683s
Feb 24 14:27:35.555: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128015757s
Feb 24 14:27:37.565: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138325414s
Feb 24 14:27:39.576: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14907418s
Feb 24 14:27:41.586: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.159344091s
Feb 24 14:27:43.596: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.169089279s
Feb 24 14:27:45.605: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.178010185s
Feb 24 14:27:47.624: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.197630387s
STEP: Saw pod success
Feb 24 14:27:47.624: INFO: Pod "pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9" satisfied condition "success or failure"
Feb 24 14:27:47.638: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 14:27:47.815: INFO: Waiting for pod pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9 to disappear
Feb 24 14:27:47.829: INFO: Pod pod-projected-configmaps-3adcfa81-4a0f-481f-bb0b-f3a9eea8bca9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:27:47.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7864" for this suite.
Feb 24 14:27:53.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:27:53.973: INFO: namespace projected-7864 deletion completed in 6.136072923s

• [SLOW TEST:24.751 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
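The projected-configMap-with-defaultMode case mounts a ConfigMap through a `projected` volume and verifies the projected files carry the requested permission bits. A hedged sketch (ConfigMap name, image, and mode value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox               # illustrative
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0400          # projected files readable by owner only (illustrative mode)
      sources:
      - configMap:
          name: my-config        # hypothetical ConfigMap name
```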
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:27:53.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 24 14:27:54.159: INFO: Waiting up to 5m0s for pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3" in namespace "var-expansion-3242" to be "success or failure"
Feb 24 14:27:54.172: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.963189ms
Feb 24 14:27:56.184: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024610272s
Feb 24 14:27:58.189: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030068789s
Feb 24 14:28:00.199: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039819649s
Feb 24 14:28:02.207: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047275592s
Feb 24 14:28:04.224: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064295705s
Feb 24 14:28:06.295: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Running", Reason="", readiness=true. Elapsed: 12.135629037s
Feb 24 14:28:08.301: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Running", Reason="", readiness=true. Elapsed: 14.141745253s
Feb 24 14:28:10.322: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.163235427s
STEP: Saw pod success
Feb 24 14:28:10.323: INFO: Pod "var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3" satisfied condition "success or failure"
Feb 24 14:28:10.331: INFO: Trying to get logs from node iruya-node pod var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3 container dapi-container: 
STEP: delete the pod
Feb 24 14:28:10.413: INFO: Waiting for pod var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3 to disappear
Feb 24 14:28:10.841: INFO: Pod var-expansion-ac1fb019-bd45-40af-a9bd-954f2c6447c3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:28:10.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3242" for this suite.
Feb 24 14:28:17.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:28:17.722: INFO: namespace var-expansion-3242 deletion completed in 6.866002794s

• [SLOW TEST:23.748 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
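Variable expansion in a container's args uses the `$(VAR)` syntax, which the kubelet resolves from the container's declared environment before the process starts — no shell is involved. A minimal sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # illustrative
    env:
    - name: MESSAGE
      value: "hello from the env"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]         # $(VAR) is expanded by the kubelet, not by the shell
```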
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:28:17.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:28:17.898: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 24 14:28:22.908: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 24 14:28:35.677: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 24 14:28:37.684: INFO: Creating deployment "test-rollover-deployment"
Feb 24 14:28:37.702: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 24 14:28:39.720: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 24 14:28:39.828: INFO: Ensure that both replica sets have 1 created replica
Feb 24 14:28:39.852: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 24 14:28:39.909: INFO: Updating deployment test-rollover-deployment
Feb 24 14:28:39.909: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 24 14:28:41.933: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 24 14:28:41.942: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 24 14:28:41.950: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:41.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:43.969: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:43.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:45.961: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:45.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:47.985: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:47.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:50.106: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:50.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:52.091: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:52.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:54.663: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:54.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:55.958: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:55.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:57.984: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:57.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151320, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:28:59.975: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:28:59.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:29:01.965: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:29:01.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:29:03.962: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:29:03.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:29:07.145: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:29:07.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:29:07.962: INFO: all replica sets need to contain the pod-template-hash label
Feb 24 14:29:07.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151318, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718151317, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 14:29:10.147: INFO: 
Feb 24 14:29:10.147: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 24 14:29:10.216: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7594,SelfLink:/apis/apps/v1/namespaces/deployment-7594/deployments/test-rollover-deployment,UID:b3715966-ca08-42f6-bf1b-386fa253fbad,ResourceVersion:25585527,Generation:2,CreationTimestamp:2020-02-24 14:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-24 14:28:38 +0000 UTC 2020-02-24 14:28:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-24 14:29:09 +0000 UTC 2020-02-24 14:28:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 24 14:29:10.221: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7594,SelfLink:/apis/apps/v1/namespaces/deployment-7594/replicasets/test-rollover-deployment-854595fc44,UID:ebf28709-ea4b-4306-bc38-91e53e453db3,ResourceVersion:25585516,Generation:2,CreationTimestamp:2020-02-24 14:28:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3715966-ca08-42f6-bf1b-386fa253fbad 0xc0023b3f27 0xc0023b3f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 24 14:29:10.221: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 24 14:29:10.221: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7594,SelfLink:/apis/apps/v1/namespaces/deployment-7594/replicasets/test-rollover-controller,UID:5826e7dd-c870-4139-b869-1c52fa1f1f17,ResourceVersion:25585525,Generation:2,CreationTimestamp:2020-02-24 14:28:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3715966-ca08-42f6-bf1b-386fa253fbad 0xc0023b3e47 0xc0023b3e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 14:29:10.221: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7594,SelfLink:/apis/apps/v1/namespaces/deployment-7594/replicasets/test-rollover-deployment-9b8b997cf,UID:08dc115b-cc8c-42ec-9110-44a8cf9ba75e,ResourceVersion:25585469,Generation:2,CreationTimestamp:2020-02-24 14:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b3715966-ca08-42f6-bf1b-386fa253fbad 0xc002970000 0xc002970001}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 24 14:29:10.319: INFO: Pod "test-rollover-deployment-854595fc44-9vftd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-9vftd,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7594,SelfLink:/api/v1/namespaces/deployment-7594/pods/test-rollover-deployment-854595fc44-9vftd,UID:a1a9c4cb-ceb1-4239-b8a3-2d844bcaf801,ResourceVersion:25585502,Generation:0,CreationTimestamp:2020-02-24 14:28:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 ebf28709-ea4b-4306-bc38-91e53e453db3 0xc002970d97 0xc002970d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x7l8d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x7l8d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-x7l8d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002970e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002970e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:28:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:28:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:28:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:28:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-24 14:28:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-24 14:28:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d0d55e4aebeb5ed55fcf609850ef7aff77bc35fb8f56d98bded9a7a32f5dcc6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:29:10.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7594" for this suite.
Feb 24 14:29:18.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:29:18.560: INFO: namespace deployment-7594 deletion completed in 8.232831738s

• [SLOW TEST:60.839 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
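The rollover test above repeatedly polls the deployment status (the `deployment status: v1.DeploymentStatus{...}` lines) until the new ReplicaSet has fully taken over and both old ReplicaSets are scaled to zero. A minimal sketch of that kind of wait loop, using a hypothetical `get_status` callable that returns the same counters the log prints (`Replicas`, `UpdatedReplicas`, `AvailableReplicas`) — this is an illustration of the polling pattern, not the framework's actual implementation:

```python
import time

def wait_for_rollover_complete(get_status, desired, timeout=300.0, interval=2.0):
    """Poll a deployment-status callable until the rollover finishes.

    get_status is a hypothetical callable returning a dict with the fields
    the log prints: Replicas, UpdatedReplicas, AvailableReplicas. The
    rollover is considered complete when every counter equals the desired
    replica count, mirroring the condition the e2e framework waits for
    before verifying that the old ReplicaSets have no replicas.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = get_status()
        if (s["UpdatedReplicas"] == desired
                and s["Replicas"] == desired
                and s["AvailableReplicas"] == desired):
            return s  # rollover done
        time.sleep(interval)
    raise TimeoutError("deployment did not finish rolling over")
```

In the log, the loop runs from 14:28:55 to 14:29:10 with `Replicas:2, UpdatedReplicas:1` until the surge pod from the new ReplicaSet becomes available and the old one is scaled down.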
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:29:18.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 24 14:29:18.821: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:29:57.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8187" for this suite.
Feb 24 14:30:03.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:30:03.627: INFO: namespace pods-8187 deletion completed in 6.214065716s

• [SLOW TEST:45.066 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
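The pod-lifecycle test above sets up a watch before submitting the pod, then verifies that both the creation and the graceful deletion were observed as events. A small sketch of that verification, where `events` is a hypothetical iterable of `(event_type, pod_name)` pairs standing in for a watch channel (assumption: the real test consumes typed watch events, not tuples):

```python
def observed_pod_lifecycle(events, name):
    """Scan a stream of (event_type, pod_name) pairs and report whether
    the named pod's creation (ADDED) and subsequent deletion (DELETED)
    were both observed, in that order, as the test above requires.
    """
    saw_added = saw_deleted = False
    for etype, pod in events:
        if pod != name:
            continue
        if etype == "ADDED":
            saw_added = True
        elif etype == "DELETED" and saw_added:
            saw_deleted = True
    return saw_added and saw_deleted
```

Ordering matters: a DELETED event only counts once the creation has been seen, which is why the test verifies "pod creation was observed" before "pod deletion was observed".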
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:30:03.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-3da34f00-41f9-416d-8478-50c4089f5ca2
STEP: Creating a pod to test consume secrets
Feb 24 14:30:03.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066" in namespace "projected-2185" to be "success or failure"
Feb 24 14:30:03.849: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 18.631342ms
Feb 24 14:30:05.944: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114160881s
Feb 24 14:30:07.955: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125315346s
Feb 24 14:30:09.970: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139572132s
Feb 24 14:30:12.092: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262149681s
Feb 24 14:30:14.101: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 10.270845481s
Feb 24 14:30:16.108: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Pending", Reason="", readiness=false. Elapsed: 12.2775788s
Feb 24 14:30:18.114: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Running", Reason="", readiness=true. Elapsed: 14.284226507s
Feb 24 14:30:20.124: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Running", Reason="", readiness=true. Elapsed: 16.293773783s
Feb 24 14:30:22.131: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.300738149s
STEP: Saw pod success
Feb 24 14:30:22.131: INFO: Pod "pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066" satisfied condition "success or failure"
Feb 24 14:30:22.133: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066 container secret-volume-test: 
STEP: delete the pod
Feb 24 14:30:22.336: INFO: Waiting for pod pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066 to disappear
Feb 24 14:30:22.366: INFO: Pod pod-projected-secrets-f14a8c4c-f16c-430b-8dfc-cbec74069066 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:30:22.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2185" for this suite.
Feb 24 14:30:28.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:30:28.642: INFO: namespace projected-2185 deletion completed in 6.271485662s

• [SLOW TEST:25.015 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
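The volume tests above wait up to 5m0s for the test pod to reach a terminal phase (the repeated `Phase="Pending"` → `Phase="Running"` → `Phase="Succeeded"` lines), the "success or failure" condition in the log. A sketch of that phase-polling wait, with a hypothetical `get_phase` callable returning one of the standard pod phase strings — illustrative only, not the framework's code:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod-phase callable until the pod reaches a terminal phase
    (Succeeded or Failed), mirroring the "success or failure" wait
    printed in the log. Raises TimeoutError if the deadline passes
    while the pod is still Pending or Running.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

After the terminal phase is seen, the test fetches the container logs to assert on the mounted secret contents, then deletes the pod and waits for it to disappear.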
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:30:28.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9ffc6e5a-3559-4f4a-a7ac-8056c6163dd7
STEP: Creating a pod to test consume secrets
Feb 24 14:30:28.927: INFO: Waiting up to 5m0s for pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307" in namespace "secrets-4674" to be "success or failure"
Feb 24 14:30:29.017: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 89.925543ms
Feb 24 14:30:31.047: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119537722s
Feb 24 14:30:33.057: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129087523s
Feb 24 14:30:35.097: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169895001s
Feb 24 14:30:37.147: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219630989s
Feb 24 14:30:39.156: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 10.228381737s
Feb 24 14:30:41.181: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 12.253236526s
Feb 24 14:30:43.678: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 14.750580856s
Feb 24 14:30:45.687: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Pending", Reason="", readiness=false. Elapsed: 16.759145518s
Feb 24 14:30:47.692: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.76445117s
STEP: Saw pod success
Feb 24 14:30:47.692: INFO: Pod "pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307" satisfied condition "success or failure"
Feb 24 14:30:47.695: INFO: Trying to get logs from node iruya-node pod pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307 container secret-volume-test: 
STEP: delete the pod
Feb 24 14:30:47.834: INFO: Waiting for pod pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307 to disappear
Feb 24 14:30:47.845: INFO: Pod pod-secrets-550c8db4-f63d-4a3d-adae-b16deb02d307 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:30:47.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4674" for this suite.
Feb 24 14:30:53.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:30:54.053: INFO: namespace secrets-4674 deletion completed in 6.165078855s

• [SLOW TEST:25.409 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:30:54.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:30:54.297: INFO: Create a RollingUpdate DaemonSet
Feb 24 14:30:54.304: INFO: Check that daemon pods launch on every node of the cluster
Feb 24 14:30:54.336: INFO: Number of nodes with available pods: 0
Feb 24 14:30:54.336: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:30:55.349: INFO: Number of nodes with available pods: 0
Feb 24 14:30:55.349: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:30:56.876: INFO: Number of nodes with available pods: 0
Feb 24 14:30:56.876: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:30:57.695: INFO: Number of nodes with available pods: 0
Feb 24 14:30:57.695: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:30:58.352: INFO: Number of nodes with available pods: 0
Feb 24 14:30:58.352: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:30:59.390: INFO: Number of nodes with available pods: 0
Feb 24 14:30:59.390: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:00.346: INFO: Number of nodes with available pods: 0
Feb 24 14:31:00.346: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:01.395: INFO: Number of nodes with available pods: 0
Feb 24 14:31:01.395: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:02.354: INFO: Number of nodes with available pods: 0
Feb 24 14:31:02.354: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:06.475: INFO: Number of nodes with available pods: 0
Feb 24 14:31:06.475: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:07.406: INFO: Number of nodes with available pods: 0
Feb 24 14:31:07.406: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:09.499: INFO: Number of nodes with available pods: 0
Feb 24 14:31:09.499: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:10.436: INFO: Number of nodes with available pods: 0
Feb 24 14:31:10.436: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:11.350: INFO: Number of nodes with available pods: 0
Feb 24 14:31:11.350: INFO: Node iruya-node is running more than one daemon pod
Feb 24 14:31:12.506: INFO: Number of nodes with available pods: 2
Feb 24 14:31:12.506: INFO: Number of running nodes: 2, number of available pods: 2
Feb 24 14:31:12.506: INFO: Update the DaemonSet to trigger a rollout
Feb 24 14:31:12.692: INFO: Updating DaemonSet daemon-set
Feb 24 14:31:24.298: INFO: Roll back the DaemonSet before rollout is complete
Feb 24 14:31:25.025: INFO: Updating DaemonSet daemon-set
Feb 24 14:31:25.025: INFO: Make sure DaemonSet rollback is complete
Feb 24 14:31:25.645: INFO: Wrong image for pod: daemon-set-4ssgw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 24 14:31:25.645: INFO: Pod daemon-set-4ssgw is not available
Feb 24 14:31:29.665: INFO: Pod daemon-set-z6kgg is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4478, will wait for the garbage collector to delete the pods
Feb 24 14:31:29.734: INFO: Deleting DaemonSet.extensions daemon-set took: 8.321066ms
Feb 24 14:31:33.535: INFO: Terminating DaemonSet.extensions daemon-set pods took: 3.800542373s
Feb 24 14:31:46.541: INFO: Number of nodes with available pods: 0
Feb 24 14:31:46.541: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 14:31:46.546: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4478/daemonsets","resourceVersion":"25585907"},"items":null}

Feb 24 14:31:46.549: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4478/pods","resourceVersion":"25585907"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:31:46.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4478" for this suite.
Feb 24 14:31:54.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:31:54.818: INFO: namespace daemonsets-4478 deletion completed in 8.186718622s

• [SLOW TEST:60.764 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
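The DaemonSet rollback spec above creates a RollingUpdate DaemonSet running `nginx:1.14-alpine` (per the "Wrong image" log line), updates it to an unpullable image (`foo:non-existent`), then rolls back before the rollout completes and checks that healthy pods were not restarted unnecessarily. A minimal DaemonSet in the shape the test uses might be (labels and names hypothetical; the image is from the log):

```yaml
# Hypothetical sketch of the DaemonSet under test.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set               # name taken from the log
spec:
  selector:
    matchLabels:
      app: daemon-set            # hypothetical label
  updateStrategy:
    type: RollingUpdate          # rollout strategy the test exercises
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # hypothetical container name
        image: docker.io/library/nginx:1.14-alpine   # image from the log
```

Outside the test framework, the same update-then-rollback sequence can be driven with `kubectl set image daemonset/daemon-set app=foo:non-existent` followed by `kubectl rollout undo daemonset/daemon-set`.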
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:31:54.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 24 14:31:54.995: INFO: Waiting up to 5m0s for pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8" in namespace "emptydir-8170" to be "success or failure"
Feb 24 14:31:55.008: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.529608ms
Feb 24 14:31:57.014: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018823036s
Feb 24 14:31:59.028: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032034867s
Feb 24 14:32:01.035: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039292516s
Feb 24 14:32:03.040: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044913804s
Feb 24 14:32:05.049: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053077717s
Feb 24 14:32:07.079: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083446242s
STEP: Saw pod success
Feb 24 14:32:07.079: INFO: Pod "pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8" satisfied condition "success or failure"
Feb 24 14:32:07.093: INFO: Trying to get logs from node iruya-node pod pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8 container test-container: 
STEP: delete the pod
Feb 24 14:32:07.164: INFO: Waiting for pod pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8 to disappear
Feb 24 14:32:07.172: INFO: Pod pod-7e76d1d7-9e57-4c5e-9483-91a36c5110b8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:32:07.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8170" for this suite.
Feb 24 14:32:15.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:32:15.310: INFO: namespace emptydir-8170 deletion completed in 8.132194669s

• [SLOW TEST:20.491 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
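The EmptyDir case above ("non-root,0666,tmpfs") creates a pod that writes a file with mode 0666 into a memory-backed emptyDir as a non-root user and checks the resulting permissions from container logs. A sketch of the volume configuration involved, with hypothetical names and UID (the container name `test-container` is from the log):

```yaml
# Hypothetical sketch: tmpfs-backed emptyDir checked by a non-root container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # hypothetical non-root UID
  containers:
  - name: test-container         # container name taken from the log
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # "Memory" makes the emptyDir tmpfs-backed
```

Note the 0666 in the test name refers to the file mode the test container creates and inspects, not a field on the emptyDir volume itself.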
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:32:15.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8483149b-0f6a-461a-83cb-da4f7496bf66
STEP: Creating a pod to test consume configMaps
Feb 24 14:32:15.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820" in namespace "projected-1327" to be "success or failure"
Feb 24 14:32:15.543: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 14.932876ms
Feb 24 14:32:18.883: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35466368s
Feb 24 14:32:20.890: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 5.36211083s
Feb 24 14:32:22.904: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 7.375152651s
Feb 24 14:32:24.916: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 9.387457485s
Feb 24 14:32:26.932: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Pending", Reason="", readiness=false. Elapsed: 11.403314517s
Feb 24 14:32:29.062: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.533948434s
STEP: Saw pod success
Feb 24 14:32:29.062: INFO: Pod "pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820" satisfied condition "success or failure"
Feb 24 14:32:29.087: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 14:32:29.261: INFO: Waiting for pod pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820 to disappear
Feb 24 14:32:29.273: INFO: Pod pod-projected-configmaps-8ee70219-6e8c-4b60-9d3b-53831a095820 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:32:29.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1327" for this suite.
Feb 24 14:32:35.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:32:35.550: INFO: namespace projected-1327 deletion completed in 6.264819488s

• [SLOW TEST:20.240 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
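The projected-configMap case above mounts the same ConfigMap through two projected volumes in one pod and reads it from both mount points. A hedged sketch (all names hypothetical except the container name from the log):

```yaml
# Hypothetical sketch: one ConfigMap consumed via two projected volumes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-example   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-example         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/volume-1/data-1 /etc/volume-2/data-1"]
    volumeMounts:
    - name: volume-1
      mountPath: /etc/volume-1
    - name: volume-2
      mountPath: /etc/volume-2
  volumes:
  - name: volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
  - name: volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
```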
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:32:35.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:32:35.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8445" for this suite.
Feb 24 14:32:57.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:32:58.045: INFO: namespace pods-8445 deletion completed in 22.27424245s

• [SLOW TEST:22.494 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
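The "Pods Set QOS Class" spec above submits a pod and verifies the API server populated `status.qosClass`. The class is derived from resource requests and limits: `Guaranteed` when every container has limits equal to requests for both CPU and memory, `BestEffort` when no container sets any, and `Burstable` otherwise. A minimal pod that would be classed `Guaranteed` (names hypothetical):

```yaml
# Hypothetical sketch: requests == limits for every container => Guaranteed QoS.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example   # hypothetical name
spec:
  containers:
  - name: app                    # hypothetical container name
    image: busybox:1.29
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m                # matches the request exactly
        memory: 100Mi
```

After creation, `kubectl get pod qos-guaranteed-example -o jsonpath='{.status.qosClass}'` should report `Guaranteed`.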
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:32:58.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 14:32:58.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7264'
Feb 24 14:33:01.026: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 14:33:01.026: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 24 14:33:03.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7264'
Feb 24 14:33:03.201: INFO: stderr: ""
Feb 24 14:33:03.201: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:33:03.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7264" for this suite.
Feb 24 14:33:09.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:33:09.539: INFO: namespace kubectl-7264 deletion completed in 6.334120362s

• [SLOW TEST:11.493 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
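The stderr captured in the "Kubectl run default" spec shows the generator-based `kubectl run` form that was deprecated in this release line (v1.15). A sketch of the deprecated invocation from the log alongside the replacement the warning itself suggests (these commands assume a reachable cluster and are illustrative, not part of the test):

```shell
# Deprecated form exercised by the test (emits the warning seen in the log):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/apps.v1

# Replacement suggested by the deprecation message:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
```

In later Kubernetes releases `kubectl run` only creates bare pods, so `kubectl create deployment` is the durable spelling for this workload.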
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:33:09.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:33:21.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-343" for this suite.
Feb 24 14:34:05.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:34:05.996: INFO: namespace kubelet-test-343 deletion completed in 44.197156725s

• [SLOW TEST:56.456 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
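The Kubelet read-only spec above verifies that a container whose root filesystem is mounted read-only cannot write to it. The behavior is controlled by a single container-level `securityContext` field; a hedged sketch (names and command hypothetical):

```yaml
# Hypothetical sketch: writes to the root filesystem should fail.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox                  # hypothetical container name
    image: busybox:1.29
    # "touch /file" is expected to fail with a read-only filesystem error.
    command: ["sh", "-c", "touch /file || echo 'read-only as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
```

Containers that need scratch space under this setting typically mount a writable `emptyDir` at a specific path instead.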
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:34:05.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-65938e24-79ca-4a4a-8a8e-372ace5edd66
STEP: Creating a pod to test consume configMaps
Feb 24 14:34:06.159: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f" in namespace "projected-9081" to be "success or failure"
Feb 24 14:34:06.168: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.67281ms
Feb 24 14:34:08.177: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017917481s
Feb 24 14:34:10.187: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027965192s
Feb 24 14:34:12.192: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033300233s
Feb 24 14:34:14.200: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041194105s
Feb 24 14:34:16.351: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192276452s
Feb 24 14:34:18.358: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.19910437s
Feb 24 14:34:20.368: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.208710865s
STEP: Saw pod success
Feb 24 14:34:20.368: INFO: Pod "pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f" satisfied condition "success or failure"
Feb 24 14:34:20.376: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 14:34:20.823: INFO: Waiting for pod pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f to disappear
Feb 24 14:34:20.835: INFO: Pod pod-projected-configmaps-56d96e38-70b4-4dcf-ad3b-71c61b2ee82f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:34:20.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9081" for this suite.
Feb 24 14:34:26.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:34:27.006: INFO: namespace projected-9081 deletion completed in 6.132509835s

• [SLOW TEST:21.010 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:34:27.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 24 14:34:27.096: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:34:50.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2599" for this suite.
Feb 24 14:34:56.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:34:56.768: INFO: namespace init-container-2599 deletion completed in 6.110967858s

• [SLOW TEST:29.762 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
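The InitContainer spec above confirms that on a `restartPolicy: Never` pod, a failing init container causes the whole pod to fail and the app containers never start. A minimal pod exhibiting that behavior (all names hypothetical):

```yaml
# Hypothetical sketch: failing init container on a RestartNever pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example    # hypothetical name
spec:
  restartPolicy: Never           # init failure is terminal; no retries
  initContainers:
  - name: init1                  # hypothetical name
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]   # deliberately failing init step
  containers:
  - name: run1                   # hypothetical name; should never start
    image: busybox:1.29
    command: ["sh", "-c", "echo should-not-run"]
```

With `restartPolicy: OnFailure` the kubelet would instead retry the init container, which is the contrast the companion "invoke init containers" specs in this suite exercise.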
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:34:56.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:35:08.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7580" for this suite.
Feb 24 14:35:57.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:35:57.130: INFO: namespace kubelet-test-7580 deletion completed in 48.192958853s

• [SLOW TEST:60.361 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:35:57.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 24 14:35:57.523: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:36:16.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8927" for this suite.
Feb 24 14:36:24.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:36:24.630: INFO: namespace init-container-8927 deletion completed in 8.130068104s

• [SLOW TEST:27.501 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:36:24.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-db000501-f154-423a-af32-0c3a3087457f
STEP: Creating a pod to test consume configMaps
Feb 24 14:36:24.820: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463" in namespace "projected-1679" to be "success or failure"
Feb 24 14:36:24.826: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573195ms
Feb 24 14:36:26.835: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015510385s
Feb 24 14:36:28.852: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03172347s
Feb 24 14:36:30.988: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168657503s
Feb 24 14:36:32.998: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178562973s
Feb 24 14:36:35.006: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185777071s
Feb 24 14:36:37.018: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.198193116s
STEP: Saw pod success
Feb 24 14:36:37.018: INFO: Pod "pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463" satisfied condition "success or failure"
Feb 24 14:36:37.025: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 14:36:37.117: INFO: Waiting for pod pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463 to disappear
Feb 24 14:36:37.138: INFO: Pod pod-projected-configmaps-111e772c-3ccc-43ab-a17b-c22ad1989463 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:36:37.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1679" for this suite.
Feb 24 14:36:45.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:36:45.401: INFO: namespace projected-1679 deletion completed in 8.248971493s

• [SLOW TEST:20.771 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
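The repeated `Phase="Pending" … Elapsed:` lines in the spec above come from the framework's wait loop, which polls the pod roughly every 2 s until it reaches a terminal phase or the 5 m timeout expires, then reports that the pod "satisfied condition 'success or failure'". A minimal Python sketch of that polling logic (the real implementation is Go in `test/e2e/framework`; `get_phase` is a hypothetical stand-in for the API call):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300, interval=2):
    """Poll get_phase() until the pod reaches a terminal phase.

    Mirrors the e2e framework's "success or failure" condition: the wait
    ends when the phase is Succeeded or Failed, or the timeout expires.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence: Pending for a few polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_success_or_failure(lambda: next(phases), interval=0)
print(result)  # Succeeded
```

The elapsed times in the log (6.5 ms, 2.0 s, 4.0 s, …) are simply the cumulative time since the first poll, which is why they grow in roughly 2 s steps.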
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:36:45.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 24 14:36:45.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2783 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 24 14:36:56.889: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0224 14:36:55.320193    2877 log.go:172] (0xc00088e0b0) (0xc0008a6140) Create stream\nI0224 14:36:55.320404    2877 log.go:172] (0xc00088e0b0) (0xc0008a6140) Stream added, broadcasting: 1\nI0224 14:36:55.334995    2877 log.go:172] (0xc00088e0b0) Reply frame received for 1\nI0224 14:36:55.335027    2877 log.go:172] (0xc00088e0b0) (0xc0008a6000) Create stream\nI0224 14:36:55.335039    2877 log.go:172] (0xc00088e0b0) (0xc0008a6000) Stream added, broadcasting: 3\nI0224 14:36:55.336103    2877 log.go:172] (0xc00088e0b0) Reply frame received for 3\nI0224 14:36:55.336126    2877 log.go:172] (0xc00088e0b0) (0xc00063a0a0) Create stream\nI0224 14:36:55.336140    2877 log.go:172] (0xc00088e0b0) (0xc00063a0a0) Stream added, broadcasting: 5\nI0224 14:36:55.339127    2877 log.go:172] (0xc00088e0b0) Reply frame received for 5\nI0224 14:36:55.339273    2877 log.go:172] (0xc00088e0b0) (0xc0008a60a0) Create stream\nI0224 14:36:55.339308    2877 log.go:172] (0xc00088e0b0) (0xc0008a60a0) Stream added, broadcasting: 7\nI0224 14:36:55.342067    2877 log.go:172] (0xc00088e0b0) Reply frame received for 7\nI0224 14:36:55.342575    2877 log.go:172] (0xc0008a6000) (3) Writing data frame\nI0224 14:36:55.343170    2877 log.go:172] (0xc0008a6000) (3) Writing data frame\nI0224 14:36:55.356147    2877 log.go:172] (0xc00088e0b0) Data frame received for 5\nI0224 14:36:55.356201    2877 log.go:172] (0xc00063a0a0) (5) Data frame handling\nI0224 14:36:55.356227    2877 log.go:172] (0xc00063a0a0) (5) Data frame sent\nI0224 14:36:55.359981    2877 log.go:172] (0xc00088e0b0) Data frame received for 5\nI0224 14:36:55.360001    2877 log.go:172] (0xc00063a0a0) (5) Data frame handling\nI0224 14:36:55.360009    2877 log.go:172] (0xc00063a0a0) (5) Data frame 
sent\nI0224 14:36:56.826235    2877 log.go:172] (0xc00088e0b0) (0xc0008a6000) Stream removed, broadcasting: 3\nI0224 14:36:56.826390    2877 log.go:172] (0xc00088e0b0) Data frame received for 1\nI0224 14:36:56.826455    2877 log.go:172] (0xc00088e0b0) (0xc0008a60a0) Stream removed, broadcasting: 7\nI0224 14:36:56.826497    2877 log.go:172] (0xc0008a6140) (1) Data frame handling\nI0224 14:36:56.826592    2877 log.go:172] (0xc0008a6140) (1) Data frame sent\nI0224 14:36:56.826643    2877 log.go:172] (0xc00088e0b0) (0xc00063a0a0) Stream removed, broadcasting: 5\nI0224 14:36:56.826699    2877 log.go:172] (0xc00088e0b0) (0xc0008a6140) Stream removed, broadcasting: 1\nI0224 14:36:56.826808    2877 log.go:172] (0xc00088e0b0) (0xc0008a6140) Stream removed, broadcasting: 1\nI0224 14:36:56.826825    2877 log.go:172] (0xc00088e0b0) (0xc0008a6000) Stream removed, broadcasting: 3\nI0224 14:36:56.826834    2877 log.go:172] (0xc00088e0b0) (0xc00063a0a0) Stream removed, broadcasting: 5\nI0224 14:36:56.826865    2877 log.go:172] (0xc00088e0b0) (0xc0008a60a0) Stream removed, broadcasting: 7\nI0224 14:36:56.827187    2877 log.go:172] (0xc00088e0b0) Go away received\n"
Feb 24 14:36:56.889: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:36:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2783" for this suite.
Feb 24 14:37:04.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:37:05.084: INFO: namespace kubectl-2783 deletion completed in 6.180496944s

• [SLOW TEST:19.683 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
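The stdout captured above (`abcd1234stdin closed`) follows directly from the command the job runs: `sh -c "cat && echo 'stdin closed'"` first echoes everything piped to stdin (the test writes `abcd1234`), and only once stdin reaches EOF does `echo` append `stdin closed`. A minimal local reproduction of that shell behavior with Python's `subprocess` (no cluster or kubectl involved; requires a POSIX `sh`):

```python
import subprocess

# Same shell command the e2e test passes to `kubectl run`: cat echoes
# stdin verbatim, then echo runs once stdin reaches EOF.
proc = subprocess.run(
    ["sh", "-c", "cat && echo 'stdin closed'"],
    input=b"abcd1234",
    capture_output=True,
    check=True,
)
print(proc.stdout.decode())  # abcd1234stdin closed
```

Because `cat` emits no trailing newline after `abcd1234`, the two outputs run together on one line, exactly as in the test's recorded stdout.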
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:37:05.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:37:05.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629" in namespace "projected-3883" to be "success or failure"
Feb 24 14:37:05.383: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 29.820978ms
Feb 24 14:37:07.453: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099653773s
Feb 24 14:37:09.517: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163818807s
Feb 24 14:37:11.528: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175305703s
Feb 24 14:37:14.542: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188974161s
Feb 24 14:37:16.558: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 11.204659545s
Feb 24 14:37:18.570: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Pending", Reason="", readiness=false. Elapsed: 13.217153824s
Feb 24 14:37:20.582: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.228839767s
STEP: Saw pod success
Feb 24 14:37:20.582: INFO: Pod "downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629" satisfied condition "success or failure"
Feb 24 14:37:20.588: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629 container client-container: 
STEP: delete the pod
Feb 24 14:37:21.371: INFO: Waiting for pod downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629 to disappear
Feb 24 14:37:21.376: INFO: Pod downwardapi-volume-1e03250d-354c-46ab-b5cd-cf5bb0b80629 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:37:21.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3883" for this suite.
Feb 24 14:37:27.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:37:27.821: INFO: namespace projected-3883 deletion completed in 6.43796743s

• [SLOW TEST:22.736 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
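The spec above exercises a defaulting rule: when a container declares no memory limit, the downward API's `limits.memory` resolves to the node's allocatable memory rather than being empty. A minimal Python sketch of that fallback (the byte values are illustrative, not taken from this run):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API behaviour under test: fall back to the node's
    allocatable memory when the container sets no memory limit."""
    return container_limit if container_limit is not None else node_allocatable

node_allocatable = 4 * 1024**3  # e.g. a 4 GiB node (illustrative)

# No limit set on the container -> node allocatable is reported.
print(effective_memory_limit(None, node_allocatable))           # 4294967296
# Explicit 512 MiB limit -> the container's own limit wins.
print(effective_memory_limit(512 * 1024**2, node_allocatable))  # 536870912
```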
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:37:27.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-tpmgw in namespace proxy-2336
I0224 14:37:28.142895       8 runners.go:180] Created replication controller with name: proxy-service-tpmgw, namespace: proxy-2336, replica count: 1
I0224 14:37:29.193493       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:30.193796       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:31.194013       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:32.194228       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:33.194430       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:34.194701       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:35.194982       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:36.195235       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:37.195531       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:38.195776       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:37:39.196054       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:40.196387       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:41.196646       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:42.196850       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:43.197110       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:44.197297       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0224 14:37:45.197620       8 runners.go:180] proxy-service-tpmgw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 24 14:37:45.205: INFO: setup took 17.249898968s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 24 14:37:45.238: INFO: (0) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 33.322153ms)
Feb 24 14:37:45.239: INFO: (0) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 33.692777ms)
Feb 24 14:37:45.239: INFO: (0) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 33.907361ms)
Feb 24 14:37:45.239: INFO: (0) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 34.124745ms)
Feb 24 14:37:45.244: INFO: (0) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 39.377485ms)
Feb 24 14:37:45.245: INFO: (0) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 40.019789ms)
Feb 24 14:37:45.245: INFO: (0) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 40.299479ms)
Feb 24 14:37:45.245: INFO: (0) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 40.347504ms)
Feb 24 14:37:45.247: INFO: (0) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 41.490797ms)
Feb 24 14:37:45.247: INFO: (0) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 41.769671ms)
Feb 24 14:37:45.247: INFO: (0) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 41.60831ms)
Feb 24 14:37:45.260: INFO: (0) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 54.630729ms)
Feb 24 14:37:45.260: INFO: (0) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 15.5547ms)
Feb 24 14:37:45.280: INFO: (1) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 15.500022ms)
Feb 24 14:37:45.282: INFO: (1) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 17.889974ms)
Feb 24 14:37:45.283: INFO: (1) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 19.261773ms)
Feb 24 14:37:45.283: INFO: (1) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 19.108465ms)
Feb 24 14:37:45.284: INFO: (1) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 25.395229ms)
Feb 24 14:37:45.290: INFO: (1) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 25.403805ms)
Feb 24 14:37:45.290: INFO: (1) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 25.417548ms)
Feb 24 14:37:45.290: INFO: (1) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 25.730471ms)
Feb 24 14:37:45.297: INFO: (2) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 6.770023ms)
Feb 24 14:37:45.302: INFO: (2) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 11.49171ms)
Feb 24 14:37:45.302: INFO: (2) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 11.58144ms)
Feb 24 14:37:45.302: INFO: (2) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 11.844883ms)
Feb 24 14:37:45.302: INFO: (2) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 12.201931ms)
Feb 24 14:37:45.302: INFO: (2) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 11.763569ms)
Feb 24 14:37:45.305: INFO: (2) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 14.269968ms)
Feb 24 14:37:45.305: INFO: (2) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 15.463722ms)
Feb 24 14:37:45.306: INFO: (2) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 16.295021ms)
Feb 24 14:37:45.307: INFO: (2) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 7.080498ms)
Feb 24 14:37:45.324: INFO: (3) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 14.285608ms)
Feb 24 14:37:45.328: INFO: (3) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 14.401914ms)
Feb 24 14:37:45.328: INFO: (3) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 14.390606ms)
Feb 24 14:37:45.331: INFO: (3) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 16.77583ms)
Feb 24 14:37:45.331: INFO: (3) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 16.729558ms)
Feb 24 14:37:45.331: INFO: (3) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 17.105307ms)
Feb 24 14:37:45.331: INFO: (3) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 17.238898ms)
Feb 24 14:37:45.331: INFO: (3) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 17.388046ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 19.587479ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 19.764852ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 19.597857ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 19.65686ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 19.598276ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 19.546021ms)
Feb 24 14:37:45.351: INFO: (4) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 19.831904ms)
Feb 24 14:37:45.365: INFO: (4) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 33.015643ms)
Feb 24 14:37:45.365: INFO: (4) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 33.378916ms)
Feb 24 14:37:45.365: INFO: (4) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 33.513451ms)
Feb 24 14:37:45.365: INFO: (4) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 33.503746ms)
Feb 24 14:37:45.366: INFO: (4) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 34.189001ms)
Feb 24 14:37:45.366: INFO: (4) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 34.338543ms)
Feb 24 14:37:45.378: INFO: (5) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 18.19505ms)
Feb 24 14:37:45.385: INFO: (5) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 18.397339ms)
Feb 24 14:37:45.385: INFO: (5) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 18.755727ms)
Feb 24 14:37:45.386: INFO: (5) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 19.670087ms)
Feb 24 14:37:45.387: INFO: (5) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 21.194717ms)
Feb 24 14:37:45.388: INFO: (5) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 21.815378ms)
Feb 24 14:37:45.388: INFO: (5) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 21.822548ms)
Feb 24 14:37:45.388: INFO: (5) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 22.334282ms)
Feb 24 14:37:45.389: INFO: (5) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 22.4067ms)
Feb 24 14:37:45.389: INFO: (5) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 22.646007ms)
Feb 24 14:37:45.390: INFO: (5) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 24.035577ms)
Feb 24 14:37:45.397: INFO: (6) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 7.187575ms)
Feb 24 14:37:45.398: INFO: (6) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 7.401544ms)
Feb 24 14:37:45.401: INFO: (6) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 9.967453ms)
Feb 24 14:37:45.402: INFO: (6) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 10.926771ms)
Feb 24 14:37:45.402: INFO: (6) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 11.440426ms)
Feb 24 14:37:45.405: INFO: (6) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 13.561793ms)
Feb 24 14:37:45.406: INFO: (6) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 14.933316ms)
Feb 24 14:37:45.406: INFO: (6) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 14.861987ms)
Feb 24 14:37:45.407: INFO: (6) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 15.297636ms)
Feb 24 14:37:45.407: INFO: (6) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 13.953235ms)
Feb 24 14:37:45.427: INFO: (7) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 14.572604ms)
Feb 24 14:37:45.427: INFO: (7) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 14.811367ms)
Feb 24 14:37:45.427: INFO: (7) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 14.774465ms)
Feb 24 14:37:45.427: INFO: (7) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 14.953739ms)
Feb 24 14:37:45.428: INFO: (7) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 15.429279ms)
Feb 24 14:37:45.428: INFO: (7) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test (200; 15.895974ms)
Feb 24 14:37:45.430: INFO: (7) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 17.240073ms)
Feb 24 14:37:45.430: INFO: (7) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 17.845421ms)
Feb 24 14:37:45.431: INFO: (7) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 18.555838ms)
Feb 24 14:37:45.431: INFO: (7) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 18.41942ms)
Feb 24 14:37:45.431: INFO: (7) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 19.090448ms)
Feb 24 14:37:45.432: INFO: (7) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 19.736988ms)
Feb 24 14:37:45.447: INFO: (8) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 14.456163ms)
Feb 24 14:37:45.447: INFO: (8) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 14.453491ms)
Feb 24 14:37:45.447: INFO: (8) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 14.555905ms)
Feb 24 14:37:45.447: INFO: (8) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 14.608158ms)
Feb 24 14:37:45.448: INFO: (8) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 16.020851ms)
Feb 24 14:37:45.448: INFO: (8) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 16.069185ms)
Feb 24 14:37:45.449: INFO: (8) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 16.74198ms)
Feb 24 14:37:45.449: INFO: (8) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 16.496238ms)
Feb 24 14:37:45.449: INFO: (8) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 16.994073ms)
Feb 24 14:37:45.450: INFO: (8) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 17.209264ms)
Feb 24 14:37:45.450: INFO: (8) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 17.621159ms)
Feb 24 14:37:45.452: INFO: (8) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 19.903818ms)
Feb 24 14:37:45.452: INFO: (8) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 20.126617ms)
Feb 24 14:37:45.453: INFO: (8) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 20.902065ms)
Feb 24 14:37:45.454: INFO: (8) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 21.93355ms)
Feb 24 14:37:45.470: INFO: (9) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 15.659088ms)
Feb 24 14:37:45.470: INFO: (9) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 16.048813ms)
Feb 24 14:37:45.471: INFO: (9) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 16.682863ms)
Feb 24 14:37:45.471: INFO: (9) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 16.490242ms)
Feb 24 14:37:45.471: INFO: (9) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test (200; 16.646453ms)
Feb 24 14:37:45.471: INFO: (9) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 16.849043ms)
Feb 24 14:37:45.472: INFO: (9) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 16.90522ms)
Feb 24 14:37:45.472: INFO: (9) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 17.369345ms)
Feb 24 14:37:45.472: INFO: (9) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 17.954372ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 18.34841ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 18.351888ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 18.24622ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 18.28653ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 18.631625ms)
Feb 24 14:37:45.473: INFO: (9) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 18.553981ms)
Feb 24 14:37:45.482: INFO: (10) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 8.727792ms)
Feb 24 14:37:45.485: INFO: (10) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 11.692673ms)
Feb 24 14:37:45.485: INFO: (10) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 12.041876ms)
Feb 24 14:37:45.486: INFO: (10) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 13.946764ms)
Feb 24 14:37:45.488: INFO: (10) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 14.553928ms)
Feb 24 14:37:45.489: INFO: (10) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 15.48771ms)
Feb 24 14:37:45.490: INFO: (10) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 16.74236ms)
Feb 24 14:37:45.490: INFO: (10) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 16.810789ms)
Feb 24 14:37:45.499: INFO: (11) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 9.332672ms)
Feb 24 14:37:45.500: INFO: (11) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 9.734434ms)
Feb 24 14:37:45.501: INFO: (11) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 11.219237ms)
Feb 24 14:37:45.502: INFO: (11) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 12.233664ms)
Feb 24 14:37:45.502: INFO: (11) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 12.36889ms)
Feb 24 14:37:45.502: INFO: (11) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 12.439466ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 12.687393ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 12.725186ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 12.666031ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 12.709475ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 13.170895ms)
Feb 24 14:37:45.503: INFO: (11) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 13.456243ms)
Feb 24 14:37:45.504: INFO: (11) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 13.919386ms)
Feb 24 14:37:45.515: INFO: (12) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 10.710754ms)
Feb 24 14:37:45.515: INFO: (12) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 10.856197ms)
Feb 24 14:37:45.515: INFO: (12) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 11.142786ms)
Feb 24 14:37:45.515: INFO: (12) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 11.208469ms)
Feb 24 14:37:45.515: INFO: (12) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test (200; 12.150293ms)
Feb 24 14:37:45.517: INFO: (12) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 12.654125ms)
Feb 24 14:37:45.517: INFO: (12) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 13.226381ms)
Feb 24 14:37:45.517: INFO: (12) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 13.344784ms)
Feb 24 14:37:45.517: INFO: (12) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 13.414622ms)
Feb 24 14:37:45.518: INFO: (12) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 13.480952ms)
Feb 24 14:37:45.519: INFO: (12) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 14.896144ms)
Feb 24 14:37:45.519: INFO: (12) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 14.935459ms)
Feb 24 14:37:45.519: INFO: (12) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 15.037861ms)
Feb 24 14:37:45.528: INFO: (13) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 8.496915ms)
Feb 24 14:37:45.529: INFO: (13) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 9.836207ms)
Feb 24 14:37:45.529: INFO: (13) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 9.846208ms)
Feb 24 14:37:45.530: INFO: (13) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 11.088816ms)
Feb 24 14:37:45.531: INFO: (13) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 11.550506ms)
Feb 24 14:37:45.531: INFO: (13) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 11.538117ms)
Feb 24 14:37:45.531: INFO: (13) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 12.030097ms)
Feb 24 14:37:45.531: INFO: (13) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 12.002219ms)
Feb 24 14:37:45.531: INFO: (13) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 12.079502ms)
Feb 24 14:37:45.532: INFO: (13) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 12.715613ms)
Feb 24 14:37:45.532: INFO: (13) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 13.216063ms)
Feb 24 14:37:45.534: INFO: (13) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 15.238532ms)
Feb 24 14:37:45.534: INFO: (13) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 15.391038ms)
Feb 24 14:37:45.535: INFO: (13) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test (200; 8.588898ms)
Feb 24 14:37:45.544: INFO: (14) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 7.633479ms)
Feb 24 14:37:45.544: INFO: (14) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 8.490758ms)
Feb 24 14:37:45.544: INFO: (14) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 9.247724ms)
Feb 24 14:37:45.544: INFO: (14) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 8.624278ms)
Feb 24 14:37:45.545: INFO: (14) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 9.523356ms)
Feb 24 14:37:45.545: INFO: (14) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 9.720261ms)
Feb 24 14:37:45.545: INFO: (14) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 9.523918ms)
Feb 24 14:37:45.545: INFO: (14) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 10.24023ms)
Feb 24 14:37:45.546: INFO: (14) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 10.988656ms)
Feb 24 14:37:45.547: INFO: (14) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 10.505195ms)
Feb 24 14:37:45.547: INFO: (14) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 10.701457ms)
Feb 24 14:37:45.548: INFO: (14) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 12.030533ms)
Feb 24 14:37:45.548: INFO: (14) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 12.458093ms)
Feb 24 14:37:45.555: INFO: (15) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 6.764204ms)
Feb 24 14:37:45.555: INFO: (15) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 6.735513ms)
Feb 24 14:37:45.555: INFO: (15) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 6.977628ms)
Feb 24 14:37:45.555: INFO: (15) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 7.46523ms)
Feb 24 14:37:45.557: INFO: (15) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 8.955318ms)
Feb 24 14:37:45.557: INFO: (15) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 9.078188ms)
Feb 24 14:37:45.558: INFO: (15) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 9.681133ms)
Feb 24 14:37:45.558: INFO: (15) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 9.803841ms)
Feb 24 14:37:45.558: INFO: (15) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 9.784342ms)
Feb 24 14:37:45.558: INFO: (15) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 9.85385ms)
Feb 24 14:37:45.558: INFO: (15) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 14.198205ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 14.389429ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 14.540815ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 14.941428ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 14.648303ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 14.406996ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 15.102928ms)
Feb 24 14:37:45.579: INFO: (16) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 15.348341ms)
Feb 24 14:37:45.580: INFO: (16) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 15.32752ms)
Feb 24 14:37:45.580: INFO: (16) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 15.338836ms)
Feb 24 14:37:45.580: INFO: (16) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 16.140091ms)
Feb 24 14:37:45.580: INFO: (16) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 15.451377ms)
Feb 24 14:37:45.580: INFO: (16) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 16.191355ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 10.390219ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 10.502762ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 10.452003ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 10.633449ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 10.646714ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 10.672402ms)
Feb 24 14:37:45.591: INFO: (17) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 11.001049ms)
Feb 24 14:37:45.592: INFO: (17) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 11.632885ms)
Feb 24 14:37:45.592: INFO: (17) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 12.155309ms)
Feb 24 14:37:45.592: INFO: (17) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 12.631193ms)
Feb 24 14:37:45.593: INFO: (17) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 12.633397ms)
Feb 24 14:37:45.593: INFO: (17) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 12.72017ms)
Feb 24 14:37:45.594: INFO: (17) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 13.713272ms)
Feb 24 14:37:45.601: INFO: (18) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 7.009892ms)
Feb 24 14:37:45.601: INFO: (18) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:1080/proxy/: ... (200; 7.107157ms)
Feb 24 14:37:45.602: INFO: (18) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 7.632266ms)
Feb 24 14:37:45.602: INFO: (18) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 7.783491ms)
Feb 24 14:37:45.602: INFO: (18) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 8.093978ms)
Feb 24 14:37:45.602: INFO: (18) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 8.270195ms)
Feb 24 14:37:45.602: INFO: (18) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 8.15857ms)
Feb 24 14:37:45.603: INFO: (18) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: test<... (200; 8.692242ms)
Feb 24 14:37:45.603: INFO: (18) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 8.929495ms)
Feb 24 14:37:45.603: INFO: (18) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 9.330522ms)
Feb 24 14:37:45.605: INFO: (18) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 11.271835ms)
Feb 24 14:37:45.605: INFO: (18) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 11.419864ms)
Feb 24 14:37:45.606: INFO: (18) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 11.575132ms)
Feb 24 14:37:45.607: INFO: (18) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 12.951365ms)
Feb 24 14:37:45.614: INFO: (19) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:1080/proxy/: test<... (200; 7.307217ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 7.313649ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:462/proxy/: tls qux (200; 7.469769ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:443/proxy/: ... (200; 7.749562ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:160/proxy/: foo (200; 7.919997ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z/proxy/: test (200; 7.885836ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 7.864065ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname1/proxy/: tls baz (200; 8.090329ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname2/proxy/: bar (200; 8.124004ms)
Feb 24 14:37:45.615: INFO: (19) /api/v1/namespaces/proxy-2336/pods/https:proxy-service-tpmgw-8dm4z:460/proxy/: tls baz (200; 8.230795ms)
Feb 24 14:37:45.616: INFO: (19) /api/v1/namespaces/proxy-2336/pods/http:proxy-service-tpmgw-8dm4z:162/proxy/: bar (200; 8.445686ms)
Feb 24 14:37:45.616: INFO: (19) /api/v1/namespaces/proxy-2336/services/proxy-service-tpmgw:portname1/proxy/: foo (200; 8.666492ms)
Feb 24 14:37:45.616: INFO: (19) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname2/proxy/: bar (200; 8.988244ms)
Feb 24 14:37:45.616: INFO: (19) /api/v1/namespaces/proxy-2336/services/https:proxy-service-tpmgw:tlsportname2/proxy/: tls qux (200; 9.073491ms)
Feb 24 14:37:45.616: INFO: (19) /api/v1/namespaces/proxy-2336/services/http:proxy-service-tpmgw:portname1/proxy/: foo (200; 9.259493ms)
STEP: deleting ReplicationController proxy-service-tpmgw in namespace proxy-2336, will wait for the garbage collector to delete the pods
Feb 24 14:37:45.677: INFO: Deleting ReplicationController proxy-service-tpmgw took: 8.194542ms
Feb 24 14:37:47.278: INFO: Terminating ReplicationController proxy-service-tpmgw pods took: 1.600387272s
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:37:53.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2336" for this suite.
Feb 24 14:37:59.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:37:59.444: INFO: namespace proxy-2336 deletion completed in 6.214457825s

• [SLOW TEST:31.622 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:37:59.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6570
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6570
STEP: Deleting pre-stop pod
Feb 24 14:38:28.720: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:38:28.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6570" for this suite.
Feb 24 14:39:08.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:39:09.038: INFO: namespace prestop-6570 deletion completed in 40.187281615s

• [SLOW TEST:69.593 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:39:09.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 24 14:39:09.231: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 24 14:39:14.243: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:39:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3633" for this suite.
Feb 24 14:39:23.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:39:23.448: INFO: namespace replication-controller-3633 deletion completed in 8.158111128s

• [SLOW TEST:14.409 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:39:23.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:39:40.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6546" for this suite.
Feb 24 14:40:02.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:40:02.684: INFO: namespace replication-controller-6546 deletion completed in 22.42377263s

• [SLOW TEST:39.236 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:40:02.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 14:40:02.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8403'
Feb 24 14:40:02.975: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 14:40:02.975: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 24 14:40:03.263: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-md7qs]
Feb 24 14:40:03.263: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-md7qs" in namespace "kubectl-8403" to be "running and ready"
Feb 24 14:40:03.273: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 9.834488ms
Feb 24 14:40:05.283: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019687085s
Feb 24 14:40:07.297: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033900166s
Feb 24 14:40:09.311: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048315362s
Feb 24 14:40:11.331: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06841466s
Feb 24 14:40:13.340: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076691398s
Feb 24 14:40:15.359: INFO: Pod "e2e-test-nginx-rc-md7qs": Phase="Running", Reason="", readiness=true. Elapsed: 12.096021486s
Feb 24 14:40:15.359: INFO: Pod "e2e-test-nginx-rc-md7qs" satisfied condition "running and ready"
Feb 24 14:40:15.359: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-md7qs]
Feb 24 14:40:15.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8403'
Feb 24 14:40:15.577: INFO: stderr: ""
Feb 24 14:40:15.577: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 24 14:40:15.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8403'
Feb 24 14:40:15.767: INFO: stderr: ""
Feb 24 14:40:15.767: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:40:15.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8403" for this suite.
Feb 24 14:40:38.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:40:38.402: INFO: namespace kubectl-8403 deletion completed in 22.625621863s

• [SLOW TEST:35.718 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:40:38.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f613b771-44a1-4a50-a7f1-fe9d352b80e5
STEP: Creating a pod to test consume configMaps
Feb 24 14:40:38.658: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c" in namespace "configmap-4651" to be "success or failure"
Feb 24 14:40:38.668: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.800771ms
Feb 24 14:40:40.682: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024244551s
Feb 24 14:40:42.691: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033582627s
Feb 24 14:40:44.702: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04385498s
Feb 24 14:40:46.713: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055368185s
Feb 24 14:40:48.721: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062932538s
STEP: Saw pod success
Feb 24 14:40:48.721: INFO: Pod "pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c" satisfied condition "success or failure"
Feb 24 14:40:48.725: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c container configmap-volume-test: 
STEP: delete the pod
Feb 24 14:40:48.798: INFO: Waiting for pod pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c to disappear
Feb 24 14:40:49.258: INFO: Pod pod-configmaps-0d180063-a36b-4eee-bcfc-de5b8e13022c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:40:49.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4651" for this suite.
Feb 24 14:40:55.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:40:55.558: INFO: namespace configmap-4651 deletion completed in 6.2872764s

• [SLOW TEST:17.155 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:40:55.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-76e6fb09-4f09-4968-99d8-8bad1b64992d
STEP: Creating secret with name s-test-opt-upd-0f1f21bc-bd11-480a-b83f-4ce0300f636c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-76e6fb09-4f09-4968-99d8-8bad1b64992d
STEP: Updating secret s-test-opt-upd-0f1f21bc-bd11-480a-b83f-4ce0300f636c
STEP: Creating secret with name s-test-opt-create-2b63165b-a33c-45c9-9536-1dc9b84e3922
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:42:38.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9589" for this suite.
Feb 24 14:43:02.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:43:02.673: INFO: namespace projected-9589 deletion completed in 24.576169323s

• [SLOW TEST:127.115 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:43:02.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9334/configmap-test-1c05cfeb-e231-4646-963e-b815b3fff718
STEP: Creating a pod to test consume configMaps
Feb 24 14:43:02.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be" in namespace "configmap-9334" to be "success or failure"
Feb 24 14:43:02.944: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 21.273681ms
Feb 24 14:43:04.959: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037007317s
Feb 24 14:43:06.976: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053757475s
Feb 24 14:43:09.876: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953500346s
Feb 24 14:43:11.898: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975672393s
Feb 24 14:43:13.908: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.985945023s
Feb 24 14:43:15.923: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.000129471s
STEP: Saw pod success
Feb 24 14:43:15.923: INFO: Pod "pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be" satisfied condition "success or failure"
Feb 24 14:43:15.928: INFO: Trying to get logs from node iruya-node pod pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be container env-test: 
STEP: delete the pod
Feb 24 14:43:16.065: INFO: Waiting for pod pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be to disappear
Feb 24 14:43:16.077: INFO: Pod pod-configmaps-efdfd044-57a6-4d12-8d4a-262140c382be no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:43:16.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9334" for this suite.
Feb 24 14:43:22.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:43:22.312: INFO: namespace configmap-9334 deletion completed in 6.225284534s

• [SLOW TEST:19.637 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:43:22.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0224 14:44:15.333300       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 14:44:15.333: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:44:15.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2733" for this suite.
Feb 24 14:44:45.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:44:45.491: INFO: namespace gc-2733 deletion completed in 30.151275475s

• [SLOW TEST:83.179 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:44:45.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9c9653e1-4868-4800-b162-51f736142059
STEP: Creating a pod to test consume secrets
Feb 24 14:44:45.772: INFO: Waiting up to 5m0s for pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da" in namespace "secrets-7945" to be "success or failure"
Feb 24 14:44:45.791: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 19.019864ms
Feb 24 14:44:47.808: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035276962s
Feb 24 14:44:49.816: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043793174s
Feb 24 14:44:51.834: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061821126s
Feb 24 14:44:53.847: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074747389s
Feb 24 14:44:55.857: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084874099s
Feb 24 14:44:57.876: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.103205215s
STEP: Saw pod success
Feb 24 14:44:57.876: INFO: Pod "pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da" satisfied condition "success or failure"
Feb 24 14:44:57.883: INFO: Trying to get logs from node iruya-node pod pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da container secret-volume-test: 
STEP: delete the pod
Feb 24 14:44:57.966: INFO: Waiting for pod pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da to disappear
Feb 24 14:44:57.972: INFO: Pod pod-secrets-aeea56c9-662d-4d54-a5ed-cef18ae119da no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:44:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7945" for this suite.
Feb 24 14:45:04.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:45:04.207: INFO: namespace secrets-7945 deletion completed in 6.22967078s

• [SLOW TEST:18.714 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:45:04.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 24 14:45:04.361: INFO: PodSpec: initContainers in spec.initContainers
Feb 24 14:46:13.848: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-eb852db2-1018-4da1-bd1c-44f6b8e61014", GenerateName:"", Namespace:"init-container-8947", SelfLink:"/api/v1/namespaces/init-container-8947/pods/pod-init-eb852db2-1018-4da1-bd1c-44f6b8e61014", UID:"fcc08425-b9c7-441e-a6ae-d0547f54be5f", ResourceVersion:"25587862", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718152304, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"361080797"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sm9hd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0030b4280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sm9hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sm9hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sm9hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d68288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00224aae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d68310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d68330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d68338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d6833c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718152304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718152304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718152304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718152304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002642100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002236150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022361c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a0acc29842fcd3f6f91281aa9bf6c26ce75f39392d52cee0bee87afacf468c19"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002642140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002642120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:46:13.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8947" for this suite.
Feb 24 14:46:32.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:46:32.225: INFO: namespace init-container-8947 deletion completed in 18.241073494s

• [SLOW TEST:88.018 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:46:32.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-5p2s
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 14:46:32.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5p2s" in namespace "subpath-8710" to be "success or failure"
Feb 24 14:46:32.573: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 11.311193ms
Feb 24 14:46:34.580: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018200265s
Feb 24 14:46:36.600: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038247037s
Feb 24 14:46:38.611: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049321663s
Feb 24 14:46:40.632: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070352568s
Feb 24 14:46:42.645: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082885153s
Feb 24 14:46:46.478: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 13.916120561s
Feb 24 14:46:48.491: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 15.929656803s
Feb 24 14:46:50.509: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 17.946851746s
Feb 24 14:46:52.520: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 19.958542073s
Feb 24 14:46:54.532: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 21.970740583s
Feb 24 14:46:56.542: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 23.980480705s
Feb 24 14:46:58.557: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 25.994852343s
Feb 24 14:47:00.571: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 28.009176067s
Feb 24 14:47:02.583: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 30.021582647s
Feb 24 14:47:04.601: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Running", Reason="", readiness=true. Elapsed: 32.039506146s
Feb 24 14:47:06.613: INFO: Pod "pod-subpath-test-secret-5p2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.05112785s
STEP: Saw pod success
Feb 24 14:47:06.613: INFO: Pod "pod-subpath-test-secret-5p2s" satisfied condition "success or failure"
Feb 24 14:47:06.620: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-5p2s container test-container-subpath-secret-5p2s: 
STEP: delete the pod
Feb 24 14:47:06.913: INFO: Waiting for pod pod-subpath-test-secret-5p2s to disappear
Feb 24 14:47:06.942: INFO: Pod pod-subpath-test-secret-5p2s no longer exists
STEP: Deleting pod pod-subpath-test-secret-5p2s
Feb 24 14:47:06.942: INFO: Deleting pod "pod-subpath-test-secret-5p2s" in namespace "subpath-8710"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:47:06.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8710" for this suite.
Feb 24 14:47:15.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:47:15.143: INFO: namespace subpath-8710 deletion completed in 8.177855678s

• [SLOW TEST:42.917 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:47:15.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bd04a2f8-9d41-425b-b42c-1b90f3571488
STEP: Creating a pod to test consume configMaps
Feb 24 14:47:15.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99" in namespace "configmap-7405" to be "success or failure"
Feb 24 14:47:15.616: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 192.908181ms
Feb 24 14:47:18.179: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.755868724s
Feb 24 14:47:20.196: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.772638633s
Feb 24 14:47:22.320: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.896321804s
Feb 24 14:47:24.328: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.905140581s
Feb 24 14:47:26.340: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 10.916593751s
Feb 24 14:47:28.380: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Pending", Reason="", readiness=false. Elapsed: 12.956613793s
Feb 24 14:47:30.437: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.013879666s
STEP: Saw pod success
Feb 24 14:47:30.437: INFO: Pod "pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99" satisfied condition "success or failure"
Feb 24 14:47:30.441: INFO: Trying to get logs from node iruya-node pod pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99 container configmap-volume-test: 
STEP: delete the pod
Feb 24 14:47:30.694: INFO: Waiting for pod pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99 to disappear
Feb 24 14:47:30.705: INFO: Pod pod-configmaps-901a1d67-d274-4d97-8479-a4056eb76a99 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:47:30.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7405" for this suite.
Feb 24 14:47:36.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:47:36.965: INFO: namespace configmap-7405 deletion completed in 6.253585788s

• [SLOW TEST:21.822 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:47:36.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 24 14:47:51.466: INFO: Successfully updated pod "annotationupdate98608d9f-bf1b-43d6-9346-4dbea50716c1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:47:53.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9126" for this suite.
Feb 24 14:48:33.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:48:33.860: INFO: namespace projected-9126 deletion completed in 40.186535668s

• [SLOW TEST:56.894 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:48:33.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bbe7b0df-1ca3-4c6e-9d12-a612bda8e47c
STEP: Creating a pod to test consume secrets
Feb 24 14:48:34.171: INFO: Waiting up to 5m0s for pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c" in namespace "secrets-3710" to be "success or failure"
Feb 24 14:48:34.237: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.009931ms
Feb 24 14:48:36.284: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112458425s
Feb 24 14:48:38.292: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120663198s
Feb 24 14:48:40.298: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126879653s
Feb 24 14:48:42.307: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135447527s
Feb 24 14:48:44.326: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155003683s
Feb 24 14:48:46.375: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.203781741s
STEP: Saw pod success
Feb 24 14:48:46.375: INFO: Pod "pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c" satisfied condition "success or failure"
Feb 24 14:48:46.381: INFO: Trying to get logs from node iruya-node pod pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c container secret-env-test: 
STEP: delete the pod
Feb 24 14:48:46.441: INFO: Waiting for pod pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c to disappear
Feb 24 14:48:46.609: INFO: Pod pod-secrets-5b42ecf8-760f-4e87-9319-0915e953592c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:48:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3710" for this suite.
Feb 24 14:48:52.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:48:52.769: INFO: namespace secrets-3710 deletion completed in 6.145545715s

• [SLOW TEST:18.908 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:48:52.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:49:08.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-52" for this suite.
Feb 24 14:49:15.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:49:15.115: INFO: namespace kubelet-test-52 deletion completed in 6.145940454s

• [SLOW TEST:22.345 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:49:15.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4722
I0224 14:49:15.270927       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4722, replica count: 1
I0224 14:49:16.321452       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:17.321859       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:18.322111       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:19.322506       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:20.322832       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:21.323070       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:22.323396       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:23.323808       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:24.324136       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:25.324359       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:26.324604       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:27.324886       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:28.325102       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 14:49:29.325378       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 24 14:49:29.496: INFO: Created: latency-svc-lzdv5
Feb 24 14:49:29.515: INFO: Got endpoints: latency-svc-lzdv5 [89.78172ms]
Feb 24 14:49:29.687: INFO: Created: latency-svc-269r7
Feb 24 14:49:29.701: INFO: Got endpoints: latency-svc-269r7 [185.143603ms]
Feb 24 14:49:29.894: INFO: Created: latency-svc-jrrgf
Feb 24 14:49:29.926: INFO: Got endpoints: latency-svc-jrrgf [410.18ms]
Feb 24 14:49:30.073: INFO: Created: latency-svc-qvsw2
Feb 24 14:49:30.085: INFO: Got endpoints: latency-svc-qvsw2 [568.218949ms]
Feb 24 14:49:30.166: INFO: Created: latency-svc-rfp4s
Feb 24 14:49:30.166: INFO: Got endpoints: latency-svc-rfp4s [649.256104ms]
Feb 24 14:49:30.345: INFO: Created: latency-svc-ggvt5
Feb 24 14:49:30.355: INFO: Got endpoints: latency-svc-ggvt5 [837.975286ms]
Feb 24 14:49:30.541: INFO: Created: latency-svc-27x8r
Feb 24 14:49:30.562: INFO: Got endpoints: latency-svc-27x8r [1.045150574s]
Feb 24 14:49:30.636: INFO: Created: latency-svc-bnsfd
Feb 24 14:49:30.839: INFO: Got endpoints: latency-svc-bnsfd [1.321584282s]
Feb 24 14:49:30.865: INFO: Created: latency-svc-dlpfp
Feb 24 14:49:30.878: INFO: Got endpoints: latency-svc-dlpfp [1.361573675s]
Feb 24 14:49:31.044: INFO: Created: latency-svc-ffcrs
Feb 24 14:49:31.059: INFO: Got endpoints: latency-svc-ffcrs [1.542913418s]
Feb 24 14:49:31.256: INFO: Created: latency-svc-lsflq
Feb 24 14:49:31.273: INFO: Got endpoints: latency-svc-lsflq [1.756854547s]
Feb 24 14:49:31.311: INFO: Created: latency-svc-zb4m4
Feb 24 14:49:31.661: INFO: Got endpoints: latency-svc-zb4m4 [2.143493753s]
Feb 24 14:49:31.795: INFO: Created: latency-svc-9qcgj
Feb 24 14:49:31.795: INFO: Got endpoints: latency-svc-9qcgj [2.277457563s]
Feb 24 14:49:32.014: INFO: Created: latency-svc-qgnfw
Feb 24 14:49:32.024: INFO: Got endpoints: latency-svc-qgnfw [2.506558401s]
Feb 24 14:49:32.186: INFO: Created: latency-svc-6zn6q
Feb 24 14:49:32.253: INFO: Got endpoints: latency-svc-6zn6q [2.735455178s]
Feb 24 14:49:32.257: INFO: Created: latency-svc-q5mlf
Feb 24 14:49:32.270: INFO: Got endpoints: latency-svc-q5mlf [2.752745005s]
Feb 24 14:49:32.448: INFO: Created: latency-svc-2lgxb
Feb 24 14:49:32.481: INFO: Got endpoints: latency-svc-2lgxb [2.78001978s]
Feb 24 14:49:32.678: INFO: Created: latency-svc-ctvwh
Feb 24 14:49:32.722: INFO: Got endpoints: latency-svc-ctvwh [2.796190972s]
Feb 24 14:49:32.730: INFO: Created: latency-svc-6th6t
Feb 24 14:49:32.747: INFO: Got endpoints: latency-svc-6th6t [2.661409224s]
Feb 24 14:49:32.860: INFO: Created: latency-svc-vsp9n
Feb 24 14:49:32.887: INFO: Got endpoints: latency-svc-vsp9n [2.721404581s]
Feb 24 14:49:32.941: INFO: Created: latency-svc-q7zqm
Feb 24 14:49:33.043: INFO: Got endpoints: latency-svc-q7zqm [2.687678585s]
Feb 24 14:49:33.093: INFO: Created: latency-svc-dtn6p
Feb 24 14:49:33.110: INFO: Got endpoints: latency-svc-dtn6p [2.547078257s]
Feb 24 14:49:33.294: INFO: Created: latency-svc-6d9cm
Feb 24 14:49:33.319: INFO: Got endpoints: latency-svc-6d9cm [2.479903164s]
Feb 24 14:49:33.500: INFO: Created: latency-svc-vx89h
Feb 24 14:49:33.500: INFO: Got endpoints: latency-svc-vx89h [2.621023602s]
Feb 24 14:49:33.571: INFO: Created: latency-svc-prhrh
Feb 24 14:49:33.584: INFO: Got endpoints: latency-svc-prhrh [2.524038315s]
Feb 24 14:49:33.747: INFO: Created: latency-svc-q9bwh
Feb 24 14:49:33.766: INFO: Got endpoints: latency-svc-q9bwh [2.493041155s]
Feb 24 14:49:34.031: INFO: Created: latency-svc-fz8vp
Feb 24 14:49:34.112: INFO: Got endpoints: latency-svc-fz8vp [2.450894014s]
Feb 24 14:49:34.114: INFO: Created: latency-svc-ggvhf
Feb 24 14:49:34.316: INFO: Got endpoints: latency-svc-ggvhf [2.521396514s]
Feb 24 14:49:34.373: INFO: Created: latency-svc-7r58h
Feb 24 14:49:34.569: INFO: Got endpoints: latency-svc-7r58h [2.54502959s]
Feb 24 14:49:34.583: INFO: Created: latency-svc-f2wp4
Feb 24 14:49:34.590: INFO: Got endpoints: latency-svc-f2wp4 [2.336379588s]
Feb 24 14:49:34.892: INFO: Created: latency-svc-h98rx
Feb 24 14:49:34.912: INFO: Got endpoints: latency-svc-h98rx [2.642618771s]
Feb 24 14:49:35.117: INFO: Created: latency-svc-kvqvl
Feb 24 14:49:35.146: INFO: Got endpoints: latency-svc-kvqvl [2.664893343s]
Feb 24 14:49:35.211: INFO: Created: latency-svc-d8c2j
Feb 24 14:49:35.411: INFO: Got endpoints: latency-svc-d8c2j [2.688372971s]
Feb 24 14:49:35.484: INFO: Created: latency-svc-jrrr9
Feb 24 14:49:35.490: INFO: Got endpoints: latency-svc-jrrr9 [2.742944437s]
Feb 24 14:49:35.691: INFO: Created: latency-svc-qnz8b
Feb 24 14:49:35.702: INFO: Got endpoints: latency-svc-qnz8b [2.814204402s]
Feb 24 14:49:35.878: INFO: Created: latency-svc-fmsjp
Feb 24 14:49:35.900: INFO: Got endpoints: latency-svc-fmsjp [2.856397453s]
Feb 24 14:49:36.089: INFO: Created: latency-svc-frxvb
Feb 24 14:49:36.114: INFO: Got endpoints: latency-svc-frxvb [3.004457316s]
Feb 24 14:49:36.400: INFO: Created: latency-svc-26zsq
Feb 24 14:49:36.400: INFO: Got endpoints: latency-svc-26zsq [3.080703445s]
Feb 24 14:49:37.685: INFO: Created: latency-svc-tmfx2
Feb 24 14:49:37.709: INFO: Got endpoints: latency-svc-tmfx2 [4.209609149s]
Feb 24 14:49:37.850: INFO: Created: latency-svc-xxxkp
Feb 24 14:49:37.935: INFO: Created: latency-svc-k77m9
Feb 24 14:49:37.936: INFO: Got endpoints: latency-svc-xxxkp [4.351778867s]
Feb 24 14:49:37.940: INFO: Got endpoints: latency-svc-k77m9 [4.173755939s]
Feb 24 14:49:38.088: INFO: Created: latency-svc-wj889
Feb 24 14:49:38.105: INFO: Got endpoints: latency-svc-wj889 [3.993341049s]
Feb 24 14:49:38.185: INFO: Created: latency-svc-mpcxt
Feb 24 14:49:38.334: INFO: Got endpoints: latency-svc-mpcxt [4.017794474s]
Feb 24 14:49:38.376: INFO: Created: latency-svc-jh8xj
Feb 24 14:49:38.389: INFO: Got endpoints: latency-svc-jh8xj [3.819339021s]
Feb 24 14:49:38.551: INFO: Created: latency-svc-sbf55
Feb 24 14:49:38.569: INFO: Got endpoints: latency-svc-sbf55 [3.979688716s]
Feb 24 14:49:38.636: INFO: Created: latency-svc-dp9ln
Feb 24 14:49:38.962: INFO: Got endpoints: latency-svc-dp9ln [4.049523284s]
Feb 24 14:49:39.634: INFO: Created: latency-svc-nsw97
Feb 24 14:49:39.642: INFO: Got endpoints: latency-svc-nsw97 [4.495854988s]
Feb 24 14:49:39.874: INFO: Created: latency-svc-dvc5s
Feb 24 14:49:39.881: INFO: Got endpoints: latency-svc-dvc5s [4.470603202s]
Feb 24 14:49:40.146: INFO: Created: latency-svc-qc9c2
Feb 24 14:49:40.303: INFO: Got endpoints: latency-svc-qc9c2 [4.81324512s]
Feb 24 14:49:40.325: INFO: Created: latency-svc-shxm7
Feb 24 14:49:40.354: INFO: Got endpoints: latency-svc-shxm7 [4.652811664s]
Feb 24 14:49:40.540: INFO: Created: latency-svc-4zmdh
Feb 24 14:49:40.560: INFO: Got endpoints: latency-svc-4zmdh [4.659510547s]
Feb 24 14:49:40.861: INFO: Created: latency-svc-p5kw5
Feb 24 14:49:41.113: INFO: Got endpoints: latency-svc-p5kw5 [4.998177391s]
Feb 24 14:49:41.284: INFO: Created: latency-svc-jr6z4
Feb 24 14:49:41.303: INFO: Got endpoints: latency-svc-jr6z4 [4.902415215s]
Feb 24 14:49:41.487: INFO: Created: latency-svc-8v6ld
Feb 24 14:49:41.493: INFO: Got endpoints: latency-svc-8v6ld [3.783069406s]
Feb 24 14:49:41.555: INFO: Created: latency-svc-z9t8h
Feb 24 14:49:41.574: INFO: Got endpoints: latency-svc-z9t8h [3.638136062s]
Feb 24 14:49:41.716: INFO: Created: latency-svc-knxc9
Feb 24 14:49:41.723: INFO: Got endpoints: latency-svc-knxc9 [3.782844739s]
Feb 24 14:49:41.861: INFO: Created: latency-svc-qzn7c
Feb 24 14:49:41.943: INFO: Got endpoints: latency-svc-qzn7c [3.837641412s]
Feb 24 14:49:41.949: INFO: Created: latency-svc-x47xp
Feb 24 14:49:42.062: INFO: Got endpoints: latency-svc-x47xp [3.727945311s]
Feb 24 14:49:42.101: INFO: Created: latency-svc-r2ppl
Feb 24 14:49:42.155: INFO: Got endpoints: latency-svc-r2ppl [3.766457112s]
Feb 24 14:49:42.258: INFO: Created: latency-svc-n7jkj
Feb 24 14:49:42.262: INFO: Got endpoints: latency-svc-n7jkj [3.692473669s]
Feb 24 14:49:42.345: INFO: Created: latency-svc-4rkw8
Feb 24 14:49:42.354: INFO: Got endpoints: latency-svc-4rkw8 [3.391376133s]
Feb 24 14:49:42.508: INFO: Created: latency-svc-6m2bc
Feb 24 14:49:42.522: INFO: Got endpoints: latency-svc-6m2bc [2.879713182s]
Feb 24 14:49:42.595: INFO: Created: latency-svc-527bt
Feb 24 14:49:42.721: INFO: Got endpoints: latency-svc-527bt [2.839926147s]
Feb 24 14:49:42.723: INFO: Created: latency-svc-djvjq
Feb 24 14:49:42.753: INFO: Got endpoints: latency-svc-djvjq [2.44916698s]
Feb 24 14:49:42.915: INFO: Created: latency-svc-72r5f
Feb 24 14:49:42.926: INFO: Got endpoints: latency-svc-72r5f [2.57173158s]
Feb 24 14:49:43.002: INFO: Created: latency-svc-7d59m
Feb 24 14:49:43.128: INFO: Got endpoints: latency-svc-7d59m [2.567809855s]
Feb 24 14:49:43.163: INFO: Created: latency-svc-s7q6j
Feb 24 14:49:43.177: INFO: Got endpoints: latency-svc-s7q6j [2.064357949s]
Feb 24 14:49:43.372: INFO: Created: latency-svc-69djh
Feb 24 14:49:43.385: INFO: Got endpoints: latency-svc-69djh [2.081879606s]
Feb 24 14:49:43.463: INFO: Created: latency-svc-gnbt9
Feb 24 14:49:43.553: INFO: Got endpoints: latency-svc-gnbt9 [2.059806131s]
Feb 24 14:49:43.619: INFO: Created: latency-svc-vb22x
Feb 24 14:49:43.629: INFO: Got endpoints: latency-svc-vb22x [2.054689336s]
Feb 24 14:49:43.813: INFO: Created: latency-svc-d24vb
Feb 24 14:49:43.822: INFO: Got endpoints: latency-svc-d24vb [2.098373367s]
Feb 24 14:49:43.968: INFO: Created: latency-svc-5l9fb
Feb 24 14:49:45.039: INFO: Got endpoints: latency-svc-5l9fb [3.095742433s]
Feb 24 14:49:45.042: INFO: Created: latency-svc-b2nhm
Feb 24 14:49:45.072: INFO: Got endpoints: latency-svc-b2nhm [1.249871463s]
Feb 24 14:49:45.269: INFO: Created: latency-svc-768v2
Feb 24 14:49:45.514: INFO: Got endpoints: latency-svc-768v2 [3.451542295s]
Feb 24 14:49:45.526: INFO: Created: latency-svc-tsgjb
Feb 24 14:49:45.558: INFO: Got endpoints: latency-svc-tsgjb [3.402761854s]
Feb 24 14:49:45.704: INFO: Created: latency-svc-2hzf6
Feb 24 14:49:45.704: INFO: Got endpoints: latency-svc-2hzf6 [3.441755045s]
Feb 24 14:49:45.899: INFO: Created: latency-svc-vz9fh
Feb 24 14:49:45.909: INFO: Got endpoints: latency-svc-vz9fh [3.555425117s]
Feb 24 14:49:45.975: INFO: Created: latency-svc-8gvfh
Feb 24 14:49:46.086: INFO: Got endpoints: latency-svc-8gvfh [3.563928021s]
Feb 24 14:49:46.114: INFO: Created: latency-svc-bnwjb
Feb 24 14:49:46.149: INFO: Got endpoints: latency-svc-bnwjb [3.427641327s]
Feb 24 14:49:46.349: INFO: Created: latency-svc-9ctgz
Feb 24 14:49:46.366: INFO: Got endpoints: latency-svc-9ctgz [3.612859644s]
Feb 24 14:49:46.571: INFO: Created: latency-svc-jr4nt
Feb 24 14:49:46.608: INFO: Got endpoints: latency-svc-jr4nt [3.681062172s]
Feb 24 14:49:46.613: INFO: Created: latency-svc-vb2nk
Feb 24 14:49:46.620: INFO: Got endpoints: latency-svc-vb2nk [3.491851298s]
Feb 24 14:49:46.814: INFO: Created: latency-svc-grwfd
Feb 24 14:49:46.817: INFO: Got endpoints: latency-svc-grwfd [3.639838667s]
Feb 24 14:49:46.886: INFO: Created: latency-svc-p8x4l
Feb 24 14:49:46.896: INFO: Got endpoints: latency-svc-p8x4l [3.510177921s]
Feb 24 14:49:47.036: INFO: Created: latency-svc-txnmx
Feb 24 14:49:47.048: INFO: Got endpoints: latency-svc-txnmx [3.495615384s]
Feb 24 14:49:47.129: INFO: Created: latency-svc-jmljn
Feb 24 14:49:47.256: INFO: Got endpoints: latency-svc-jmljn [3.627172338s]
Feb 24 14:49:47.295: INFO: Created: latency-svc-m48n8
Feb 24 14:49:47.329: INFO: Got endpoints: latency-svc-m48n8 [2.289453724s]
Feb 24 14:49:47.513: INFO: Created: latency-svc-qcxhv
Feb 24 14:49:47.516: INFO: Got endpoints: latency-svc-qcxhv [2.443878177s]
Feb 24 14:49:47.603: INFO: Created: latency-svc-4fxg6
Feb 24 14:49:47.708: INFO: Got endpoints: latency-svc-4fxg6 [2.193368112s]
Feb 24 14:49:47.759: INFO: Created: latency-svc-w4mgc
Feb 24 14:49:47.805: INFO: Got endpoints: latency-svc-w4mgc [2.246565842s]
Feb 24 14:49:47.958: INFO: Created: latency-svc-bwlzg
Feb 24 14:49:47.972: INFO: Got endpoints: latency-svc-bwlzg [2.268381319s]
Feb 24 14:49:48.178: INFO: Created: latency-svc-sqbs6
Feb 24 14:49:48.191: INFO: Got endpoints: latency-svc-sqbs6 [2.281790224s]
Feb 24 14:49:48.255: INFO: Created: latency-svc-65d6m
Feb 24 14:49:48.416: INFO: Got endpoints: latency-svc-65d6m [2.329862012s]
Feb 24 14:49:48.426: INFO: Created: latency-svc-btqhq
Feb 24 14:49:48.440: INFO: Got endpoints: latency-svc-btqhq [2.29053239s]
Feb 24 14:49:48.717: INFO: Created: latency-svc-4lpz9
Feb 24 14:49:48.722: INFO: Got endpoints: latency-svc-4lpz9 [2.356313828s]
Feb 24 14:49:48.801: INFO: Created: latency-svc-cw929
Feb 24 14:49:48.811: INFO: Got endpoints: latency-svc-cw929 [2.203216435s]
Feb 24 14:49:49.031: INFO: Created: latency-svc-4725j
Feb 24 14:49:49.042: INFO: Got endpoints: latency-svc-4725j [2.421705328s]
Feb 24 14:49:49.273: INFO: Created: latency-svc-4p6cm
Feb 24 14:49:49.342: INFO: Got endpoints: latency-svc-4p6cm [2.524721235s]
Feb 24 14:49:49.602: INFO: Created: latency-svc-klqgx
Feb 24 14:49:49.615: INFO: Got endpoints: latency-svc-klqgx [2.719033594s]
Feb 24 14:49:49.702: INFO: Created: latency-svc-bkdl9
Feb 24 14:49:49.843: INFO: Got endpoints: latency-svc-bkdl9 [2.794168642s]
Feb 24 14:49:49.879: INFO: Created: latency-svc-fhw8h
Feb 24 14:49:49.892: INFO: Got endpoints: latency-svc-fhw8h [2.6351823s]
Feb 24 14:49:50.097: INFO: Created: latency-svc-nrsvq
Feb 24 14:49:50.107: INFO: Got endpoints: latency-svc-nrsvq [2.778563013s]
Feb 24 14:49:50.183: INFO: Created: latency-svc-v7j4j
Feb 24 14:49:50.378: INFO: Got endpoints: latency-svc-v7j4j [2.861818439s]
Feb 24 14:49:50.393: INFO: Created: latency-svc-lwfz2
Feb 24 14:49:50.409: INFO: Got endpoints: latency-svc-lwfz2 [2.701040457s]
Feb 24 14:49:50.468: INFO: Created: latency-svc-ds7zg
Feb 24 14:49:50.760: INFO: Got endpoints: latency-svc-ds7zg [2.955356988s]
Feb 24 14:49:50.786: INFO: Created: latency-svc-8xw9f
Feb 24 14:49:50.834: INFO: Got endpoints: latency-svc-8xw9f [2.862061114s]
Feb 24 14:49:51.004: INFO: Created: latency-svc-9t8jx
Feb 24 14:49:51.018: INFO: Got endpoints: latency-svc-9t8jx [2.82639894s]
Feb 24 14:49:51.088: INFO: Created: latency-svc-2jqgw
Feb 24 14:49:51.252: INFO: Got endpoints: latency-svc-2jqgw [2.835492553s]
Feb 24 14:49:51.268: INFO: Created: latency-svc-zpk7q
Feb 24 14:49:51.283: INFO: Got endpoints: latency-svc-zpk7q [2.843098123s]
Feb 24 14:49:51.499: INFO: Created: latency-svc-t8w5r
Feb 24 14:49:51.586: INFO: Got endpoints: latency-svc-t8w5r [2.863870727s]
Feb 24 14:49:51.607: INFO: Created: latency-svc-hrw2d
Feb 24 14:49:51.768: INFO: Got endpoints: latency-svc-hrw2d [2.957065626s]
Feb 24 14:49:51.830: INFO: Created: latency-svc-gmrcd
Feb 24 14:49:51.856: INFO: Got endpoints: latency-svc-gmrcd [2.814409902s]
Feb 24 14:49:52.055: INFO: Created: latency-svc-fq9zf
Feb 24 14:49:52.102: INFO: Got endpoints: latency-svc-fq9zf [2.759814689s]
Feb 24 14:49:52.141: INFO: Created: latency-svc-pvg9v
Feb 24 14:49:52.343: INFO: Got endpoints: latency-svc-pvg9v [2.728386372s]
Feb 24 14:49:52.399: INFO: Created: latency-svc-sjn5b
Feb 24 14:49:52.415: INFO: Got endpoints: latency-svc-sjn5b [2.571805014s]
Feb 24 14:49:52.638: INFO: Created: latency-svc-7dk75
Feb 24 14:49:52.648: INFO: Got endpoints: latency-svc-7dk75 [2.755830896s]
Feb 24 14:49:52.724: INFO: Created: latency-svc-vzvpz
Feb 24 14:49:52.909: INFO: Got endpoints: latency-svc-vzvpz [2.801549685s]
Feb 24 14:49:52.915: INFO: Created: latency-svc-55ljg
Feb 24 14:49:52.927: INFO: Got endpoints: latency-svc-55ljg [2.54843388s]
Feb 24 14:49:53.105: INFO: Created: latency-svc-9lmcc
Feb 24 14:49:53.120: INFO: Got endpoints: latency-svc-9lmcc [2.710679505s]
Feb 24 14:49:53.176: INFO: Created: latency-svc-pcxbt
Feb 24 14:49:53.184: INFO: Got endpoints: latency-svc-pcxbt [2.423648703s]
Feb 24 14:49:53.427: INFO: Created: latency-svc-xng4g
Feb 24 14:49:53.495: INFO: Got endpoints: latency-svc-xng4g [2.660189681s]
Feb 24 14:49:53.498: INFO: Created: latency-svc-vwzhr
Feb 24 14:49:53.700: INFO: Got endpoints: latency-svc-vwzhr [2.681273387s]
Feb 24 14:49:53.758: INFO: Created: latency-svc-s4bn9
Feb 24 14:49:53.768: INFO: Got endpoints: latency-svc-s4bn9 [2.516371881s]
Feb 24 14:49:54.023: INFO: Created: latency-svc-57v9g
Feb 24 14:49:54.046: INFO: Got endpoints: latency-svc-57v9g [2.762409558s]
Feb 24 14:49:54.088: INFO: Created: latency-svc-42hfs
Feb 24 14:49:54.237: INFO: Got endpoints: latency-svc-42hfs [2.650497055s]
Feb 24 14:49:54.259: INFO: Created: latency-svc-nmsl5
Feb 24 14:49:54.283: INFO: Got endpoints: latency-svc-nmsl5 [2.514118742s]
Feb 24 14:49:54.450: INFO: Created: latency-svc-7bkn6
Feb 24 14:49:54.462: INFO: Got endpoints: latency-svc-7bkn6 [2.605483285s]
Feb 24 14:49:54.528: INFO: Created: latency-svc-7sd9m
Feb 24 14:49:54.537: INFO: Got endpoints: latency-svc-7sd9m [2.434594056s]
Feb 24 14:49:54.775: INFO: Created: latency-svc-28tj4
Feb 24 14:49:54.969: INFO: Got endpoints: latency-svc-28tj4 [2.625655396s]
Feb 24 14:49:55.014: INFO: Created: latency-svc-8jrcl
Feb 24 14:49:55.018: INFO: Got endpoints: latency-svc-8jrcl [2.60260357s]
Feb 24 14:49:55.183: INFO: Created: latency-svc-8g6dl
Feb 24 14:49:55.211: INFO: Got endpoints: latency-svc-8g6dl [2.563575122s]
Feb 24 14:49:55.295: INFO: Created: latency-svc-9wmx4
Feb 24 14:49:55.474: INFO: Got endpoints: latency-svc-9wmx4 [2.565253611s]
Feb 24 14:49:55.700: INFO: Created: latency-svc-zzrqv
Feb 24 14:49:55.720: INFO: Got endpoints: latency-svc-zzrqv [2.793666006s]
Feb 24 14:49:55.795: INFO: Created: latency-svc-9fmzq
Feb 24 14:49:55.957: INFO: Got endpoints: latency-svc-9fmzq [2.837059732s]
Feb 24 14:49:55.986: INFO: Created: latency-svc-zk58v
Feb 24 14:49:55.993: INFO: Got endpoints: latency-svc-zk58v [2.808337595s]
Feb 24 14:49:56.323: INFO: Created: latency-svc-8lzk2
Feb 24 14:49:56.345: INFO: Got endpoints: latency-svc-8lzk2 [2.850225944s]
Feb 24 14:49:56.403: INFO: Created: latency-svc-j6cxh
Feb 24 14:49:56.511: INFO: Got endpoints: latency-svc-j6cxh [2.811058249s]
Feb 24 14:49:56.566: INFO: Created: latency-svc-8vwtr
Feb 24 14:49:56.581: INFO: Got endpoints: latency-svc-8vwtr [2.812038146s]
Feb 24 14:49:56.858: INFO: Created: latency-svc-2wjkw
Feb 24 14:49:56.867: INFO: Got endpoints: latency-svc-2wjkw [2.820639955s]
Feb 24 14:49:56.951: INFO: Created: latency-svc-qlfgm
Feb 24 14:49:57.063: INFO: Got endpoints: latency-svc-qlfgm [2.825908308s]
Feb 24 14:49:57.127: INFO: Created: latency-svc-x4vgf
Feb 24 14:49:57.137: INFO: Got endpoints: latency-svc-x4vgf [2.854094481s]
Feb 24 14:49:57.358: INFO: Created: latency-svc-tv2jz
Feb 24 14:49:57.371: INFO: Got endpoints: latency-svc-tv2jz [2.90927885s]
Feb 24 14:49:57.616: INFO: Created: latency-svc-wt4fp
Feb 24 14:49:57.654: INFO: Got endpoints: latency-svc-wt4fp [3.116757847s]
Feb 24 14:49:57.985: INFO: Created: latency-svc-wzhss
Feb 24 14:49:57.999: INFO: Got endpoints: latency-svc-wzhss [3.029635237s]
Feb 24 14:49:58.198: INFO: Created: latency-svc-dctmt
Feb 24 14:49:58.211: INFO: Got endpoints: latency-svc-dctmt [3.192933529s]
Feb 24 14:49:58.524: INFO: Created: latency-svc-xcwqj
Feb 24 14:49:58.524: INFO: Got endpoints: latency-svc-xcwqj [3.312794316s]
Feb 24 14:49:58.586: INFO: Created: latency-svc-xzs8v
Feb 24 14:49:58.812: INFO: Got endpoints: latency-svc-xzs8v [3.337070777s]
Feb 24 14:49:58.857: INFO: Created: latency-svc-v7m7v
Feb 24 14:49:58.857: INFO: Got endpoints: latency-svc-v7m7v [3.136740933s]
Feb 24 14:49:59.045: INFO: Created: latency-svc-l49cn
Feb 24 14:49:59.100: INFO: Got endpoints: latency-svc-l49cn [3.143316482s]
Feb 24 14:49:59.282: INFO: Created: latency-svc-qr84g
Feb 24 14:49:59.320: INFO: Got endpoints: latency-svc-qr84g [3.327261778s]
Feb 24 14:49:59.468: INFO: Created: latency-svc-r2c94
Feb 24 14:49:59.477: INFO: Got endpoints: latency-svc-r2c94 [3.131677088s]
Feb 24 14:49:59.578: INFO: Created: latency-svc-blmvk
Feb 24 14:49:59.631: INFO: Got endpoints: latency-svc-blmvk [3.119351148s]
Feb 24 14:49:59.697: INFO: Created: latency-svc-m69pj
Feb 24 14:49:59.709: INFO: Got endpoints: latency-svc-m69pj [3.127920326s]
Feb 24 14:49:59.917: INFO: Created: latency-svc-72rlq
Feb 24 14:50:00.002: INFO: Got endpoints: latency-svc-72rlq [3.135305246s]
Feb 24 14:50:00.009: INFO: Created: latency-svc-tkgrw
Feb 24 14:50:00.917: INFO: Got endpoints: latency-svc-tkgrw [3.854098899s]
Feb 24 14:50:01.020: INFO: Created: latency-svc-gjlf6
Feb 24 14:50:01.146: INFO: Got endpoints: latency-svc-gjlf6 [4.008927858s]
Feb 24 14:50:01.217: INFO: Created: latency-svc-4z7mj
Feb 24 14:50:01.229: INFO: Got endpoints: latency-svc-4z7mj [3.857728224s]
Feb 24 14:50:01.367: INFO: Created: latency-svc-xqdj4
Feb 24 14:50:01.379: INFO: Got endpoints: latency-svc-xqdj4 [3.725032165s]
Feb 24 14:50:01.587: INFO: Created: latency-svc-8blgr
Feb 24 14:50:01.599: INFO: Got endpoints: latency-svc-8blgr [3.599679376s]
Feb 24 14:50:01.666: INFO: Created: latency-svc-4mvhq
Feb 24 14:50:01.819: INFO: Got endpoints: latency-svc-4mvhq [3.608057006s]
Feb 24 14:50:01.825: INFO: Created: latency-svc-csr48
Feb 24 14:50:01.838: INFO: Got endpoints: latency-svc-csr48 [3.313176151s]
Feb 24 14:50:02.058: INFO: Created: latency-svc-w8tkj
Feb 24 14:50:02.063: INFO: Got endpoints: latency-svc-w8tkj [3.251235291s]
Feb 24 14:50:02.252: INFO: Created: latency-svc-sjmdn
Feb 24 14:50:02.259: INFO: Got endpoints: latency-svc-sjmdn [3.40162592s]
Feb 24 14:50:02.341: INFO: Created: latency-svc-9s6tl
Feb 24 14:50:02.441: INFO: Got endpoints: latency-svc-9s6tl [3.33997704s]
Feb 24 14:50:02.448: INFO: Created: latency-svc-5sct4
Feb 24 14:50:02.526: INFO: Got endpoints: latency-svc-5sct4 [3.205978999s]
Feb 24 14:50:02.526: INFO: Created: latency-svc-wx9qs
Feb 24 14:50:02.716: INFO: Got endpoints: latency-svc-wx9qs [3.238872101s]
Feb 24 14:50:02.732: INFO: Created: latency-svc-7slxk
Feb 24 14:50:02.756: INFO: Got endpoints: latency-svc-7slxk [3.125759826s]
Feb 24 14:50:03.199: INFO: Created: latency-svc-hdjp4
Feb 24 14:50:03.208: INFO: Got endpoints: latency-svc-hdjp4 [3.499499881s]
Feb 24 14:50:03.283: INFO: Created: latency-svc-zzp8x
Feb 24 14:50:03.399: INFO: Got endpoints: latency-svc-zzp8x [3.396348961s]
Feb 24 14:50:03.436: INFO: Created: latency-svc-h99h4
Feb 24 14:50:03.444: INFO: Got endpoints: latency-svc-h99h4 [2.526968912s]
Feb 24 14:50:03.604: INFO: Created: latency-svc-8s2q7
Feb 24 14:50:03.692: INFO: Created: latency-svc-65m99
Feb 24 14:50:03.692: INFO: Got endpoints: latency-svc-8s2q7 [2.546258961s]
Feb 24 14:50:03.813: INFO: Got endpoints: latency-svc-65m99 [2.583085525s]
Feb 24 14:50:03.828: INFO: Created: latency-svc-9668q
Feb 24 14:50:03.845: INFO: Got endpoints: latency-svc-9668q [2.466304696s]
Feb 24 14:50:03.910: INFO: Created: latency-svc-clt6d
Feb 24 14:50:04.039: INFO: Got endpoints: latency-svc-clt6d [2.439925399s]
Feb 24 14:50:04.106: INFO: Created: latency-svc-d5z5k
Feb 24 14:50:04.126: INFO: Got endpoints: latency-svc-d5z5k [2.306593514s]
Feb 24 14:50:04.242: INFO: Created: latency-svc-rlbzf
Feb 24 14:50:04.257: INFO: Got endpoints: latency-svc-rlbzf [2.41882307s]
Feb 24 14:50:04.475: INFO: Created: latency-svc-hh2pm
Feb 24 14:50:04.518: INFO: Created: latency-svc-xqm6j
Feb 24 14:50:04.519: INFO: Got endpoints: latency-svc-hh2pm [2.455303339s]
Feb 24 14:50:04.537: INFO: Got endpoints: latency-svc-xqm6j [2.278244039s]
Feb 24 14:50:04.687: INFO: Created: latency-svc-l2dqb
Feb 24 14:50:04.694: INFO: Got endpoints: latency-svc-l2dqb [2.253018685s]
Feb 24 14:50:04.869: INFO: Created: latency-svc-4hdq7
Feb 24 14:50:04.942: INFO: Got endpoints: latency-svc-4hdq7 [2.41509762s]
Feb 24 14:50:04.949: INFO: Created: latency-svc-qxnpg
Feb 24 14:50:05.067: INFO: Got endpoints: latency-svc-qxnpg [2.350562261s]
Feb 24 14:50:05.104: INFO: Created: latency-svc-k9hlx
Feb 24 14:50:05.119: INFO: Got endpoints: latency-svc-k9hlx [2.362553439s]
Feb 24 14:50:05.276: INFO: Created: latency-svc-2zbqk
Feb 24 14:50:05.287: INFO: Got endpoints: latency-svc-2zbqk [2.078058842s]
Feb 24 14:50:05.518: INFO: Created: latency-svc-sg6fj
Feb 24 14:50:05.526: INFO: Got endpoints: latency-svc-sg6fj [2.127663391s]
Feb 24 14:50:05.616: INFO: Created: latency-svc-s9c65
Feb 24 14:50:05.738: INFO: Got endpoints: latency-svc-s9c65 [2.293366983s]
Feb 24 14:50:05.825: INFO: Created: latency-svc-cz59z
Feb 24 14:50:05.957: INFO: Got endpoints: latency-svc-cz59z [2.264263969s]
Feb 24 14:50:05.997: INFO: Created: latency-svc-622dz
Feb 24 14:50:06.003: INFO: Got endpoints: latency-svc-622dz [2.190681021s]
Feb 24 14:50:06.229: INFO: Created: latency-svc-7qkmp
Feb 24 14:50:06.282: INFO: Got endpoints: latency-svc-7qkmp [2.43693348s]
Feb 24 14:50:06.285: INFO: Created: latency-svc-gq444
Feb 24 14:50:06.308: INFO: Got endpoints: latency-svc-gq444 [2.268532977s]
Feb 24 14:50:06.465: INFO: Created: latency-svc-x7nmn
Feb 24 14:50:06.539: INFO: Got endpoints: latency-svc-x7nmn [2.412562242s]
Feb 24 14:50:06.545: INFO: Created: latency-svc-qcr2c
Feb 24 14:50:06.647: INFO: Got endpoints: latency-svc-qcr2c [2.389803061s]
Feb 24 14:50:06.728: INFO: Created: latency-svc-5fmxm
Feb 24 14:50:06.734: INFO: Got endpoints: latency-svc-5fmxm [2.215833537s]
Feb 24 14:50:06.909: INFO: Created: latency-svc-9p72j
Feb 24 14:50:06.915: INFO: Got endpoints: latency-svc-9p72j [2.377159165s]
Feb 24 14:50:07.108: INFO: Created: latency-svc-9dgqz
Feb 24 14:50:07.167: INFO: Got endpoints: latency-svc-9dgqz [2.473469746s]
Feb 24 14:50:07.306: INFO: Created: latency-svc-l4l55
Feb 24 14:50:07.315: INFO: Got endpoints: latency-svc-l4l55 [2.372920193s]
Feb 24 14:50:07.373: INFO: Created: latency-svc-hfcpk
Feb 24 14:50:07.383: INFO: Got endpoints: latency-svc-hfcpk [2.315814653s]
Feb 24 14:50:07.579: INFO: Created: latency-svc-2vc6x
Feb 24 14:50:07.637: INFO: Got endpoints: latency-svc-2vc6x [2.518207305s]
Feb 24 14:50:07.644: INFO: Created: latency-svc-wzk4p
Feb 24 14:50:07.842: INFO: Got endpoints: latency-svc-wzk4p [2.555141494s]
Feb 24 14:50:07.859: INFO: Created: latency-svc-kjshg
Feb 24 14:50:07.881: INFO: Got endpoints: latency-svc-kjshg [2.354438974s]
Feb 24 14:50:07.949: INFO: Created: latency-svc-5859l
Feb 24 14:50:08.097: INFO: Got endpoints: latency-svc-5859l [2.359393939s]
Feb 24 14:50:08.185: INFO: Created: latency-svc-g7xsx
Feb 24 14:50:09.348: INFO: Got endpoints: latency-svc-g7xsx [3.391222536s]
Feb 24 14:50:09.349: INFO: Latencies: [185.143603ms 410.18ms 568.218949ms 649.256104ms 837.975286ms 1.045150574s 1.249871463s 1.321584282s 1.361573675s 1.542913418s 1.756854547s 2.054689336s 2.059806131s 2.064357949s 2.078058842s 2.081879606s 2.098373367s 2.127663391s 2.143493753s 2.190681021s 2.193368112s 2.203216435s 2.215833537s 2.246565842s 2.253018685s 2.264263969s 2.268381319s 2.268532977s 2.277457563s 2.278244039s 2.281790224s 2.289453724s 2.29053239s 2.293366983s 2.306593514s 2.315814653s 2.329862012s 2.336379588s 2.350562261s 2.354438974s 2.356313828s 2.359393939s 2.362553439s 2.372920193s 2.377159165s 2.389803061s 2.412562242s 2.41509762s 2.41882307s 2.421705328s 2.423648703s 2.434594056s 2.43693348s 2.439925399s 2.443878177s 2.44916698s 2.450894014s 2.455303339s 2.466304696s 2.473469746s 2.479903164s 2.493041155s 2.506558401s 2.514118742s 2.516371881s 2.518207305s 2.521396514s 2.524038315s 2.524721235s 2.526968912s 2.54502959s 2.546258961s 2.547078257s 2.54843388s 2.555141494s 2.563575122s 2.565253611s 2.567809855s 2.57173158s 2.571805014s 2.583085525s 2.60260357s 2.605483285s 2.621023602s 2.625655396s 2.6351823s 2.642618771s 2.650497055s 2.660189681s 2.661409224s 2.664893343s 2.681273387s 2.687678585s 2.688372971s 2.701040457s 2.710679505s 2.719033594s 2.721404581s 2.728386372s 2.735455178s 2.742944437s 2.752745005s 2.755830896s 2.759814689s 2.762409558s 2.778563013s 2.78001978s 2.793666006s 2.794168642s 2.796190972s 2.801549685s 2.808337595s 2.811058249s 2.812038146s 2.814204402s 2.814409902s 2.820639955s 2.825908308s 2.82639894s 2.835492553s 2.837059732s 2.839926147s 2.843098123s 2.850225944s 2.854094481s 2.856397453s 2.861818439s 2.862061114s 2.863870727s 2.879713182s 2.90927885s 2.955356988s 2.957065626s 3.004457316s 3.029635237s 3.080703445s 3.095742433s 3.116757847s 3.119351148s 3.125759826s 3.127920326s 3.131677088s 3.135305246s 3.136740933s 3.143316482s 3.192933529s 3.205978999s 3.238872101s 3.251235291s 3.312794316s 3.313176151s 3.327261778s 3.337070777s 3.33997704s 3.391222536s 3.391376133s 3.396348961s 3.40162592s 3.402761854s 3.427641327s 3.441755045s 3.451542295s 3.491851298s 3.495615384s 3.499499881s 3.510177921s 3.555425117s 3.563928021s 3.599679376s 3.608057006s 3.612859644s 3.627172338s 3.638136062s 3.639838667s 3.681062172s 3.692473669s 3.725032165s 3.727945311s 3.766457112s 3.782844739s 3.783069406s 3.819339021s 3.837641412s 3.854098899s 3.857728224s 3.979688716s 3.993341049s 4.008927858s 4.017794474s 4.049523284s 4.173755939s 4.209609149s 4.351778867s 4.470603202s 4.495854988s 4.652811664s 4.659510547s 4.81324512s 4.902415215s 4.998177391s]
Feb 24 14:50:09.349: INFO: 50 %ile: 2.742944437s
Feb 24 14:50:09.349: INFO: 90 %ile: 3.783069406s
Feb 24 14:50:09.349: INFO: 99 %ile: 4.902415215s
Feb 24 14:50:09.349: INFO: Total sample count: 200
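The three percentile lines above come from the sorted 200-entry latency list. A minimal sketch of the nearest-rank calculation that reproduces them (the e2e framework's exact indexing is an assumption here, but with this data `sorted[n*p/100]` matches the logged 2.742944437s / 3.783069406s / 4.902415215s values):

```python
def percentile(sorted_samples, p):
    """Nearest-rank p-th percentile of an ascending-sorted list."""
    if not sorted_samples:
        raise ValueError("no samples")
    # Index n*p/100, clamped to the last element for p close to 100.
    idx = min(len(sorted_samples) * p // 100, len(sorted_samples) - 1)
    return sorted_samples[idx]

# Illustrative values in seconds, not the full 200-sample list from the log.
samples = sorted([0.185, 0.410, 2.74, 3.78, 4.90, 4.99])
print(percentile(samples, 50))  # → 3.78
```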
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:50:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4722" for this suite.
Feb 24 14:51:37.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:51:37.554: INFO: namespace svc-latency-4722 deletion completed in 1m28.192297991s

• [SLOW TEST:142.439 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:51:37.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 24 14:51:49.839: INFO: Pod pod-hostip-be72ca4e-19a6-436e-a328-866cff999450 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:51:49.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9358" for this suite.
Feb 24 14:52:11.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:52:12.089: INFO: namespace pods-9358 deletion completed in 22.241824316s

• [SLOW TEST:34.534 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:52:12.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 24 14:52:24.477: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6fc84d49-dfa3-4772-ba31-6527f5a94237,GenerateName:,Namespace:events-5024,SelfLink:/api/v1/namespaces/events-5024/pods/send-events-6fc84d49-dfa3-4772-ba31-6527f5a94237,UID:10d0ba1e-7454-4a80-9bed-c7c7798a7610,ResourceVersion:25589757,Generation:0,CreationTimestamp:2020-02-24 14:52:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 295439636,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gxjq6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gxjq6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-gxjq6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001df38e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001df3900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:52:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:52:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:52:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 14:52:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-24 14:52:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-24 14:52:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://c85c02f335531c13695109a694ce9c182aa375616b763c5c3497d0c46a1af2e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 24 14:52:26.496: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 24 14:52:28.513: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:52:28.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5024" for this suite.
Feb 24 14:53:22.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:53:22.825: INFO: namespace events-5024 deletion completed in 54.16094216s

• [SLOW TEST:70.734 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:53:22.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.140.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.140.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.140.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.140.46_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.140.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.140.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.140.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.140.46_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 14:53:46.298: INFO: Unable to read wheezy_udp@dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.314: INFO: Unable to read wheezy_tcp@dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.331: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.338: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.348: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.353: INFO: Unable to read wheezy_udp@PodARecord from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.357: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.362: INFO: Unable to read 10.96.140.46_udp@PTR from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.370: INFO: Unable to read 10.96.140.46_tcp@PTR from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.378: INFO: Unable to read jessie_udp@dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.384: INFO: Unable to read jessie_tcp@dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.390: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.396: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.401: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.404: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.410: INFO: Unable to read jessie_udp@PodARecord from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.415: INFO: Unable to read jessie_tcp@PodARecord from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.420: INFO: Unable to read 10.96.140.46_udp@PTR from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.431: INFO: Unable to read 10.96.140.46_tcp@PTR from pod dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56: the server could not find the requested resource (get pods dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56)
Feb 24 14:53:46.431: INFO: Lookups using dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56 failed for: [wheezy_udp@dns-test-service.dns-607.svc.cluster.local wheezy_tcp@dns-test-service.dns-607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.140.46_udp@PTR 10.96.140.46_tcp@PTR jessie_udp@dns-test-service.dns-607.svc.cluster.local jessie_tcp@dns-test-service.dns-607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-607.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-607.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-607.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.140.46_udp@PTR 10.96.140.46_tcp@PTR]
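The "Lookups ... failed for" line above is the check side of the dig loops shown earlier: the probe pod writes an `OK` marker file under `/results` for each name it resolves (e.g. `wheezy_udp@PodARecord`), and the test reports every expected name whose marker is still absent, then retries. A simplified local-directory sketch of that check (the real framework reads the files through the pod's API proxy, which is omitted here):

```python
import os

def failed_lookups(results_dir, expected_names):
    """Return the expected names with no 'OK' marker file written yet."""
    return [name for name in expected_names
            if not os.path.isfile(os.path.join(results_dir, name))]
```

Once this returns an empty list for both the wheezy and jessie name sets, the framework logs the "DNS probes ... succeeded" line.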

Feb 24 14:53:51.641: INFO: DNS probes using dns-607/dns-test-5c8dc2cc-bbb4-4f02-9549-66d97b244c56 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:53:52.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-607" for this suite.
Feb 24 14:53:58.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:53:58.737: INFO: namespace dns-607 deletion completed in 6.380899903s

• [SLOW TEST:35.912 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:53:58.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 14:53:58.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8158'
Feb 24 14:54:01.783: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 14:54:01.784: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 24 14:54:01.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8158'
Feb 24 14:54:02.184: INFO: stderr: ""
Feb 24 14:54:02.184: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:54:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8158" for this suite.
Feb 24 14:54:10.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:54:10.317: INFO: namespace kubectl-8158 deletion completed in 8.123299394s

• [SLOW TEST:11.579 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:54:10.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 14:54:10.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 24 14:54:10.781: INFO: stderr: ""
Feb 24 14:54:10.781: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
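The "should check is all data is printed" spec passes when the `kubectl version` stdout above contains both a client and a server version stanza. A minimal sketch of that assertion (the substring check is an assumption; the real test may inspect individual `version.Info` fields):

```python
def has_both_versions(stdout):
    """True when kubectl printed both version stanzas."""
    return "Client Version" in stdout and "Server Version" in stdout
```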
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:54:10.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5265" for this suite.
Feb 24 14:54:16.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:54:16.989: INFO: namespace kubectl-5265 deletion completed in 6.195915343s

• [SLOW TEST:6.671 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:54:16.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 24 14:54:17.159: INFO: Waiting up to 5m0s for pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533" in namespace "containers-4542" to be "success or failure"
Feb 24 14:54:17.187: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 27.729859ms
Feb 24 14:54:19.196: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036537365s
Feb 24 14:54:21.203: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043876477s
Feb 24 14:54:23.211: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052084039s
Feb 24 14:54:25.221: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06146254s
Feb 24 14:54:27.234: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075091804s
Feb 24 14:54:29.244: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084579883s
STEP: Saw pod success
Feb 24 14:54:29.244: INFO: Pod "client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533" satisfied condition "success or failure"
Feb 24 14:54:29.249: INFO: Trying to get logs from node iruya-node pod client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533 container test-container: 
STEP: delete the pod
Feb 24 14:54:29.363: INFO: Waiting for pod client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533 to disappear
Feb 24 14:54:29.461: INFO: Pod client-containers-2d1bcaef-7406-4d5f-9c0e-2f3ac2980533 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:54:29.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4542" for this suite.
Feb 24 14:54:35.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:54:35.708: INFO: namespace containers-4542 deletion completed in 6.232074322s

• [SLOW TEST:18.719 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
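Editor's note: the test above creates a pod whose `spec.containers[].command` replaces the image's default ENTRYPOINT. A minimal sketch of such a pod spec follows; the image and command values are assumptions for illustration, not taken from the log.

```python
# Sketch of a pod that overrides the image's Docker ENTRYPOINT.
# `command` maps to ENTRYPOINT; `args` would map to CMD instead.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",                  # assumption
            "command": ["/bin/echo", "override"],  # replaces ENTRYPOINT
        }],
    },
}
print(pod["spec"]["containers"][0]["command"])
```

The test then waits for the pod to reach `Succeeded` and checks the container's output, which is why the log polls the pod phase until "success or failure".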
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:54:35.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:54:35.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8" in namespace "projected-1720" to be "success or failure"
Feb 24 14:54:35.949: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.084886ms
Feb 24 14:54:37.959: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029300374s
Feb 24 14:54:39.965: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036165984s
Feb 24 14:54:41.981: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051590423s
Feb 24 14:54:44.027: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097698547s
Feb 24 14:54:46.038: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108708254s
Feb 24 14:54:48.048: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.118428604s
Feb 24 14:54:50.064: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.13465584s
Feb 24 14:54:52.071: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.141584949s
STEP: Saw pod success
Feb 24 14:54:52.071: INFO: Pod "downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8" satisfied condition "success or failure"
Feb 24 14:54:52.074: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8 container client-container: 
STEP: delete the pod
Feb 24 14:54:52.211: INFO: Waiting for pod downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8 to disappear
Feb 24 14:54:52.215: INFO: Pod downwardapi-volume-9f4585a5-f58c-4060-a2e8-c020db9c2fa8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:54:52.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1720" for this suite.
Feb 24 14:54:58.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:54:58.727: INFO: namespace projected-1720 deletion completed in 6.490935807s

• [SLOW TEST:23.018 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
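Editor's note: the projected downward API test above mounts a volume whose file exposes the container's cpu limit through a `resourceFieldRef`. A sketch of the projected volume source follows; the volume name and file path are assumptions, while the container name `client-container` appears in the log.

```python
# Sketch of a projected downwardAPI volume exposing limits.cpu as a file.
volume = {
    "name": "podinfo",  # hypothetical volume name
    "projected": {
        "sources": [{
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",  # hypothetical file name
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                        # divisor scales the value; "1m" yields millicores.
                        "divisor": "1m",
                    },
                }],
            },
        }],
    },
}
```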
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:54:58.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 14:54:58.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79" in namespace "downward-api-7871" to be "success or failure"
Feb 24 14:54:59.114: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 124.887634ms
Feb 24 14:55:01.123: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133203719s
Feb 24 14:55:03.141: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151184871s
Feb 24 14:55:05.150: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160180055s
Feb 24 14:55:07.156: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166839865s
Feb 24 14:55:09.164: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 10.174694876s
Feb 24 14:55:11.172: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182210579s
Feb 24 14:55:13.181: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.191823582s
STEP: Saw pod success
Feb 24 14:55:13.181: INFO: Pod "downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79" satisfied condition "success or failure"
Feb 24 14:55:13.185: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79 container client-container: 
STEP: delete the pod
Feb 24 14:55:13.445: INFO: Waiting for pod downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79 to disappear
Feb 24 14:55:13.572: INFO: Pod downwardapi-volume-43a12556-6564-4644-bdb9-62923e4a8b79 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:55:13.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7871" for this suite.
Feb 24 14:55:19.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:55:19.738: INFO: namespace downward-api-7871 deletion completed in 6.157813523s

• [SLOW TEST:21.010 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
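Editor's note: the DefaultMode test above sets the file mode applied to downward API volume files. In the API the mode is an integer (conventionally written in octal; 0644 is the default when unset). A sketch, with the particular mode chosen here as an assumption:

```python
# Sketch of a downwardAPI volume with an explicit defaultMode.
volume = {
    "name": "podinfo",  # hypothetical volume name
    "downwardAPI": {
        "defaultMode": 0o400,  # assumption for illustration: owner read-only
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},
        }],
    },
}
print(oct(volume["downwardAPI"]["defaultMode"]))  # 0o400
```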
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:55:19.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 24 14:55:19.898: INFO: Waiting up to 5m0s for pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9" in namespace "emptydir-6498" to be "success or failure"
Feb 24 14:55:19.914: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.327091ms
Feb 24 14:55:21.925: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027038706s
Feb 24 14:55:23.938: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04024309s
Feb 24 14:55:25.947: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048278212s
Feb 24 14:55:27.955: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056451652s
Feb 24 14:55:29.963: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064997966s
Feb 24 14:55:31.971: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.072859056s
STEP: Saw pod success
Feb 24 14:55:31.971: INFO: Pod "pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9" satisfied condition "success or failure"
Feb 24 14:55:31.977: INFO: Trying to get logs from node iruya-node pod pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9 container test-container: 
STEP: delete the pod
Feb 24 14:55:32.155: INFO: Waiting for pod pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9 to disappear
Feb 24 14:55:32.170: INFO: Pod pod-1c91e7c9-9176-42c0-8dbc-75d2ef09f7e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:55:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6498" for this suite.
Feb 24 14:55:38.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:55:38.433: INFO: namespace emptydir-6498 deletion completed in 6.255646272s

• [SLOW TEST:18.694 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
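Editor's note: "(root,0666,default)" in the test name above means the test container runs as root, the file is created with mode 0666, and the emptyDir uses the default (disk-backed) medium. A small helper makes the octal mode legible as an `ls`-style permission string:

```python
# Render an octal file mode as an ls-style rwx string.
def mode_string(mode):
    bits = "rwxrwxrwx"
    return "".join(
        b if mode & (1 << (8 - i)) else "-"
        for i, b in enumerate(bits)
    )

# An empty emptyDir spec ({}) selects the default medium; "Memory" would
# select a tmpfs-backed volume instead.
volume = {"name": "test-volume", "emptyDir": {}}

print(mode_string(0o666))  # rw-rw-rw-
```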
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:55:38.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 14:55:50.986: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:55:51.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1709" for this suite.
Feb 24 14:55:57.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:55:57.179: INFO: namespace container-runtime-1709 deletion completed in 6.149540467s

• [SLOW TEST:18.746 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
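Editor's note: the termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`, under which the kubelet uses the tail of the container log as the termination message when the container fails and wrote nothing to its termination message path. A container-spec sketch (image is an assumption; the path shown is the API default):

```python
# Sketch of a container using log output as a fallback termination message.
container = {
    "name": "termination-message-container",  # hypothetical name
    "image": "busybox",                       # assumption
    # With FallbackToLogsOnError, a failing container's log tail becomes
    # the termination message if terminationMessagePath was left empty.
    "terminationMessagePolicy": "FallbackToLogsOnError",
    "terminationMessagePath": "/dev/termination-log",  # the API default
}
```

This matches the log line above, where the expected message "DONE" was read back from the terminated container's status.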
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:55:57.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 24 14:56:07.860: INFO: 10 pods remaining
Feb 24 14:56:07.860: INFO: 0 pods has nil DeletionTimestamp
Feb 24 14:56:07.860: INFO: 
Feb 24 14:56:08.226: INFO: 0 pods remaining
Feb 24 14:56:08.226: INFO: 0 pods has nil DeletionTimestamp
Feb 24 14:56:08.226: INFO: 
STEP: Gathering metrics
W0224 14:56:09.187271       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 14:56:09.187: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:56:09.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3543" for this suite.
Feb 24 14:56:29.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:56:29.548: INFO: namespace gc-3543 deletion completed in 20.356210309s

• [SLOW TEST:32.368 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
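Editor's note: "keep the rc around until all its pods are deleted if the deleteOptions says so" corresponds to foreground cascading deletion: the owner object gets a `deletionTimestamp` and the `foregroundDeletion` finalizer, and is only removed once its dependents are gone. A sketch of the delete options involved:

```python
# Sketch of the DeleteOptions driving foreground cascading deletion.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    # "Foreground": owner waits for dependents (the behavior tested above).
    # "Background": owner is deleted first, dependents cleaned up after.
    # "Orphan": dependents are left behind.
    "propagationPolicy": "Foreground",
}
```

That is why the log shows pods draining ("10 pods remaining" then "0 pods remaining") before the replication controller itself disappears.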
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:56:29.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6551
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 24 14:56:29.966: INFO: Found 0 stateful pods, waiting for 3
Feb 24 14:56:39.975: INFO: Found 1 stateful pods, waiting for 3
Feb 24 14:56:50.059: INFO: Found 2 stateful pods, waiting for 3
Feb 24 14:56:59.973: INFO: Found 2 stateful pods, waiting for 3
Feb 24 14:57:09.977: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:57:09.977: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:57:09.977: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 24 14:57:19.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:57:19.985: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:57:19.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 24 14:57:20.025: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 24 14:57:31.147: INFO: Updating stateful set ss2
Feb 24 14:57:31.238: INFO: Waiting for Pod statefulset-6551/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 24 14:57:41.692: INFO: Found 2 stateful pods, waiting for 3
Feb 24 14:57:51.701: INFO: Found 2 stateful pods, waiting for 3
Feb 24 14:58:02.442: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:58:02.442: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:58:02.442: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 24 14:58:11.708: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:58:11.708: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 14:58:11.708: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 24 14:58:11.759: INFO: Updating stateful set ss2
Feb 24 14:58:11.875: INFO: Waiting for Pod statefulset-6551/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 14:58:23.186: INFO: Updating stateful set ss2
Feb 24 14:58:23.419: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Feb 24 14:58:23.419: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 14:58:33.831: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Feb 24 14:58:33.831: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 14:58:43.432: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
Feb 24 14:58:43.432: INFO: Waiting for Pod statefulset-6551/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 24 14:58:53.443: INFO: Waiting for StatefulSet statefulset-6551/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 24 14:59:03.435: INFO: Deleting all statefulset in ns statefulset-6551
Feb 24 14:59:03.441: INFO: Scaling statefulset ss2 to 0
Feb 24 14:59:45.344: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 14:59:45.350: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:59:45.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6551" for this suite.
Feb 24 14:59:51.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 14:59:51.622: INFO: namespace statefulset-6551 deletion completed in 6.239920927s

• [SLOW TEST:202.075 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
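Editor's note: the canary and phased rolling updates in the StatefulSet test above are driven by `spec.updateStrategy.rollingUpdate.partition`: only pods with ordinal >= partition receive the new revision. A sketch of the selection logic (replica count 3 and the nginx image change are taken from the log; the partition value here is an assumption matching the canary step):

```python
# Which ordinals a given partition value allows to update.
replicas = 3   # ss2-0, ss2-1, ss2-2, as in the log
partition = 2  # assumed canary setting: only the highest ordinal updates

updated = [ordinal for ordinal in range(replicas) if ordinal >= partition]
print(updated)  # [2] -> only ss2-2 moves to nginx:1.15-alpine first
```

A phased rollout then lowers the partition step by step (2 -> 1 -> 0), which matches the log waiting on ss2-2, then ss2-1, then ss2-0 to reach the new revision.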
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 14:59:51.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 24 14:59:51.734: INFO: Waiting up to 5m0s for pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78" in namespace "emptydir-4600" to be "success or failure"
Feb 24 14:59:51.749: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464498ms
Feb 24 14:59:53.764: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030105292s
Feb 24 14:59:55.770: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036162893s
Feb 24 14:59:57.790: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055596047s
Feb 24 14:59:59.800: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066124057s
STEP: Saw pod success
Feb 24 14:59:59.800: INFO: Pod "pod-e0d8111e-b053-42de-bff2-af11d208bf78" satisfied condition "success or failure"
Feb 24 14:59:59.805: INFO: Trying to get logs from node iruya-node pod pod-e0d8111e-b053-42de-bff2-af11d208bf78 container test-container: 
STEP: delete the pod
Feb 24 14:59:59.893: INFO: Waiting for pod pod-e0d8111e-b053-42de-bff2-af11d208bf78 to disappear
Feb 24 14:59:59.901: INFO: Pod pod-e0d8111e-b053-42de-bff2-af11d208bf78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 14:59:59.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4600" for this suite.
Feb 24 15:00:06.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:00:06.130: INFO: namespace emptydir-4600 deletion completed in 6.223594181s

• [SLOW TEST:14.507 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:00:06.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 24 15:00:06.293: INFO: Number of nodes with available pods: 0
Feb 24 15:00:06.293: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:08.174: INFO: Number of nodes with available pods: 0
Feb 24 15:00:08.174: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:08.772: INFO: Number of nodes with available pods: 0
Feb 24 15:00:08.772: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:09.570: INFO: Number of nodes with available pods: 0
Feb 24 15:00:09.571: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:10.316: INFO: Number of nodes with available pods: 0
Feb 24 15:00:10.316: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:11.315: INFO: Number of nodes with available pods: 0
Feb 24 15:00:11.315: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:12.477: INFO: Number of nodes with available pods: 0
Feb 24 15:00:12.477: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:13.943: INFO: Number of nodes with available pods: 0
Feb 24 15:00:13.943: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:14.316: INFO: Number of nodes with available pods: 0
Feb 24 15:00:14.316: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:15.323: INFO: Number of nodes with available pods: 0
Feb 24 15:00:15.323: INFO: Node iruya-node is running more than one daemon pod
Feb 24 15:00:16.316: INFO: Number of nodes with available pods: 1
Feb 24 15:00:16.316: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:17.309: INFO: Number of nodes with available pods: 2
Feb 24 15:00:17.309: INFO: Number of running nodes: 2, number of available pods: 2
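The polling above repeats until the number of nodes with an available daemon pod equals the number of running nodes. As a minimal sketch of that wait logic (names like `wait_for_daemon_pods` and the simulated sample sequence are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_daemon_pods(get_available_count, node_count,
                         timeout=300.0, poll=1.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll until every node reports an available daemon pod, or time out.

    `get_available_count` stands in for the framework's check that lists
    pods by the DaemonSet's selector and counts nodes with a Ready pod.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if get_available_count() == node_count:
            return True
        sleep(poll)
    return False

# Simulated rollout mirroring the log above for a 2-node cluster:
# successive polls see 0, 0, 1, then 2 available pods.
samples = iter([0, 0, 1, 2])
ok = wait_for_daemon_pods(lambda: next(samples), node_count=2,
                          sleep=lambda _: None)
```

The same loop explains the later "revived" check: after one daemon pod is deleted, the count drops to 1 and the poll continues until the controller recreates it and the count returns to 2.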
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 24 15:00:17.372: INFO: Number of nodes with available pods: 1
Feb 24 15:00:17.372: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:18.387: INFO: Number of nodes with available pods: 1
Feb 24 15:00:18.387: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:19.387: INFO: Number of nodes with available pods: 1
Feb 24 15:00:19.387: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:20.403: INFO: Number of nodes with available pods: 1
Feb 24 15:00:20.403: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:21.386: INFO: Number of nodes with available pods: 1
Feb 24 15:00:21.386: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:22.387: INFO: Number of nodes with available pods: 1
Feb 24 15:00:22.387: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:23.387: INFO: Number of nodes with available pods: 1
Feb 24 15:00:23.387: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:24.388: INFO: Number of nodes with available pods: 1
Feb 24 15:00:24.388: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:25.385: INFO: Number of nodes with available pods: 1
Feb 24 15:00:25.385: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:26.388: INFO: Number of nodes with available pods: 1
Feb 24 15:00:26.388: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:27.386: INFO: Number of nodes with available pods: 1
Feb 24 15:00:27.386: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:28.385: INFO: Number of nodes with available pods: 1
Feb 24 15:00:28.385: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:29.387: INFO: Number of nodes with available pods: 1
Feb 24 15:00:29.387: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:30.384: INFO: Number of nodes with available pods: 1
Feb 24 15:00:30.384: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:31.388: INFO: Number of nodes with available pods: 1
Feb 24 15:00:31.388: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:32.794: INFO: Number of nodes with available pods: 1
Feb 24 15:00:32.794: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:33.404: INFO: Number of nodes with available pods: 1
Feb 24 15:00:33.404: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:34.389: INFO: Number of nodes with available pods: 1
Feb 24 15:00:34.389: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:35.388: INFO: Number of nodes with available pods: 1
Feb 24 15:00:35.389: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 24 15:00:36.387: INFO: Number of nodes with available pods: 2
Feb 24 15:00:36.387: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-139, will wait for the garbage collector to delete the pods
Feb 24 15:00:36.464: INFO: Deleting DaemonSet.extensions daemon-set took: 18.346247ms
Feb 24 15:00:36.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.559441ms
Feb 24 15:00:47.876: INFO: Number of nodes with available pods: 0
Feb 24 15:00:47.876: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 15:00:47.883: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-139/daemonsets","resourceVersion":"25591061"},"items":null}

Feb 24 15:00:47.889: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-139/pods","resourceVersion":"25591061"},"items":null}

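After the garbage collector finishes, the test dumps the DaemonSet and Pod lists to confirm both are empty. A small sketch of that check, using trimmed copies of the two JSON payloads printed above (`selfLink` omitted for brevity; `list_is_empty` is an illustrative helper, not framework code):

```python
import json

daemonsets = json.loads('{"kind":"DaemonSetList","apiVersion":"apps/v1",'
                        '"metadata":{"resourceVersion":"25591061"},"items":null}')
pods = json.loads('{"kind":"PodList","apiVersion":"v1",'
                  '"metadata":{"resourceVersion":"25591061"},"items":null}')

def list_is_empty(obj):
    # An empty Kubernetes *List may serialize "items" as null rather
    # than [], so treat both the same.
    return not (obj.get("items") or [])

cleaned_up = list_is_empty(daemonsets) and list_is_empty(pods)
```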
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:00:47.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-139" for this suite.
Feb 24 15:00:53.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:00:54.012: INFO: namespace daemonsets-139 deletion completed in 6.096677695s

• [SLOW TEST:47.881 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:00:54.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 24 15:01:02.697: INFO: Successfully updated pod "annotationupdate5d0d2cc0-b5e1-456a-8338-d4458b9f81ab"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:01:04.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6124" for this suite.
Feb 24 15:01:26.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:01:27.063: INFO: namespace downward-api-6124 deletion completed in 22.192926783s

• [SLOW TEST:33.051 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:01:27.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:01:57.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1535" for this suite.
Feb 24 15:02:03.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:02:03.592: INFO: namespace namespaces-1535 deletion completed in 6.14875526s
STEP: Destroying namespace "nsdeletetest-9324" for this suite.
Feb 24 15:02:03.595: INFO: Namespace nsdeletetest-9324 was already deleted
STEP: Destroying namespace "nsdeletetest-290" for this suite.
Feb 24 15:02:09.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:02:09.782: INFO: namespace nsdeletetest-290 deletion completed in 6.187063892s

• [SLOW TEST:42.719 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:02:09.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 24 15:02:09.869: INFO: Waiting up to 5m0s for pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d" in namespace "downward-api-3514" to be "success or failure"
Feb 24 15:02:09.954: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Pending", Reason="", readiness=false. Elapsed: 85.198203ms
Feb 24 15:02:11.980: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110991717s
Feb 24 15:02:13.999: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129506474s
Feb 24 15:02:16.007: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137573242s
Feb 24 15:02:18.076: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206571366s
Feb 24 15:02:20.120: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.250791785s
STEP: Saw pod success
Feb 24 15:02:20.120: INFO: Pod "downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d" satisfied condition "success or failure"
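The "success or failure" condition polled above is satisfied once the pod's phase leaves Pending/Running. A minimal sketch of that condition (the `finished` helper and the hard-coded phase sequence are illustrative, taken from the ~2s polls in the log):

```python
def finished(phase):
    """The "success or failure" condition: the pod is done once its phase
    is terminal; Succeeded satisfies the condition, Failed fails the test."""
    return phase in ("Succeeded", "Failed")

# Phases observed across the six polls logged above.
observed = ["Pending", "Pending", "Pending", "Pending", "Pending", "Succeeded"]
final_phase = next(p for p in observed if finished(p))
polls_needed = observed.index(final_phase) + 1
```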
Feb 24 15:02:20.124: INFO: Trying to get logs from node iruya-node pod downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d container dapi-container: 
STEP: delete the pod
Feb 24 15:02:20.336: INFO: Waiting for pod downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d to disappear
Feb 24 15:02:20.344: INFO: Pod downward-api-c4efd754-8848-4b80-ae91-fedf05aaf91d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:02:20.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3514" for this suite.
Feb 24 15:02:26.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:02:26.582: INFO: namespace downward-api-3514 deletion completed in 6.228993345s

• [SLOW TEST:16.799 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:02:26.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-3b400689-af74-4388-9906-a04dc65878f7
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:02:26.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1622" for this suite.
Feb 24 15:02:32.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:02:32.936: INFO: namespace secrets-1622 deletion completed in 6.198276724s

• [SLOW TEST:6.352 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:02:32.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7479
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7479
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7479
Feb 24 15:02:33.145: INFO: Found 0 stateful pods, waiting for 1
Feb 24 15:02:43.152: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 24 15:02:43.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 15:02:44.055: INFO: stderr: "I0224 15:02:43.404170    3019 log.go:172] (0xc0009ee420) (0xc0004046e0) Create stream\nI0224 15:02:43.404464    3019 log.go:172] (0xc0009ee420) (0xc0004046e0) Stream added, broadcasting: 1\nI0224 15:02:43.416288    3019 log.go:172] (0xc0009ee420) Reply frame received for 1\nI0224 15:02:43.416350    3019 log.go:172] (0xc0009ee420) (0xc0005f63c0) Create stream\nI0224 15:02:43.416363    3019 log.go:172] (0xc0009ee420) (0xc0005f63c0) Stream added, broadcasting: 3\nI0224 15:02:43.418084    3019 log.go:172] (0xc0009ee420) Reply frame received for 3\nI0224 15:02:43.418106    3019 log.go:172] (0xc0009ee420) (0xc0005f6460) Create stream\nI0224 15:02:43.418115    3019 log.go:172] (0xc0009ee420) (0xc0005f6460) Stream added, broadcasting: 5\nI0224 15:02:43.420158    3019 log.go:172] (0xc0009ee420) Reply frame received for 5\nI0224 15:02:43.573981    3019 log.go:172] (0xc0009ee420) Data frame received for 5\nI0224 15:02:43.574073    3019 log.go:172] (0xc0005f6460) (5) Data frame handling\nI0224 15:02:43.574114    3019 log.go:172] (0xc0005f6460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 15:02:43.679668    3019 log.go:172] (0xc0009ee420) Data frame received for 3\nI0224 15:02:43.679799    3019 log.go:172] (0xc0005f63c0) (3) Data frame handling\nI0224 15:02:43.679883    3019 log.go:172] (0xc0005f63c0) (3) Data frame sent\nI0224 15:02:44.009968    3019 log.go:172] (0xc0009ee420) Data frame received for 1\nI0224 15:02:44.010157    3019 log.go:172] (0xc0004046e0) (1) Data frame handling\nI0224 15:02:44.010243    3019 log.go:172] (0xc0004046e0) (1) Data frame sent\nI0224 15:02:44.010298    3019 log.go:172] (0xc0009ee420) (0xc0004046e0) Stream removed, broadcasting: 1\nI0224 15:02:44.030175    3019 log.go:172] (0xc0009ee420) (0xc0005f63c0) Stream removed, broadcasting: 3\nI0224 15:02:44.030979    3019 log.go:172] (0xc0009ee420) (0xc0005f6460) Stream removed, broadcasting: 5\nI0224 15:02:44.031082    3019 log.go:172] 
(0xc0009ee420) (0xc0004046e0) Stream removed, broadcasting: 1\nI0224 15:02:44.031095    3019 log.go:172] (0xc0009ee420) (0xc0005f63c0) Stream removed, broadcasting: 3\nI0224 15:02:44.031144    3019 log.go:172] (0xc0009ee420) (0xc0005f6460) Stream removed, broadcasting: 5\n"
Feb 24 15:02:44.055: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 15:02:44.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

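The `kubectl exec` invocation above moves nginx's index.html out of the web root, which (presumably via a readiness check that depends on serving that file) flips the pod to not-Ready so the scale-up halt can be observed. A sketch of how that command line is assembled (`break_readiness_cmd` is a hypothetical helper, not the e2e framework's API):

```python
def break_readiness_cmd(namespace, pod):
    # `|| true` keeps the exec exiting 0 even if index.html was already
    # moved, so repeating the step is harmless.
    return ["kubectl", "--kubeconfig=/root/.kube/config", "exec",
            f"--namespace={namespace}", pod, "--", "/bin/sh", "-x", "-c",
            "mv -v /usr/share/nginx/html/index.html /tmp/ || true"]

cmd = break_readiness_cmd("statefulset-7479", "ss-0")
```

The later `mv /tmp/index.html /usr/share/nginx/html/` run is the inverse: restoring the file lets the pod turn Ready again so scaling can proceed.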
Feb 24 15:02:44.083: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 15:02:44.083: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 15:02:44.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999584s
Feb 24 15:02:45.146: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971497251s
Feb 24 15:02:46.154: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.963375246s
Feb 24 15:02:47.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.954931341s
Feb 24 15:02:48.176: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.944725326s
Feb 24 15:02:49.198: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.932728819s
Feb 24 15:02:50.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.91120041s
Feb 24 15:02:51.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.892328159s
Feb 24 15:02:52.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.881856588s
Feb 24 15:02:53.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 875.692852ms
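The countdown above checks, for a fixed 10-second window, that the StatefulSet never grows past its current replica count while a pod is unhealthy; each line reports the time left in the window. A sketch of that remaining-time bookkeeping (names and the sample offsets are illustrative):

```python
def remaining(deadline, now):
    """Seconds left in the window during which the replica count must hold."""
    return max(deadline - now, 0.0)

# Poll offsets (seconds) resembling the first three log lines above,
# measured against the 10-second verification window.
window_end = 10.0
offsets = [0.0, 1.03, 2.04]
left = [round(remaining(window_end, t), 2) for t in offsets]
```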
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7479
Feb 24 15:02:54.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 15:02:54.750: INFO: stderr: "I0224 15:02:54.418675    3042 log.go:172] (0xc0008de8f0) (0xc0008d6b40) Create stream\nI0224 15:02:54.418801    3042 log.go:172] (0xc0008de8f0) (0xc0008d6b40) Stream added, broadcasting: 1\nI0224 15:02:54.431838    3042 log.go:172] (0xc0008de8f0) Reply frame received for 1\nI0224 15:02:54.431869    3042 log.go:172] (0xc0008de8f0) (0xc0008d6000) Create stream\nI0224 15:02:54.431878    3042 log.go:172] (0xc0008de8f0) (0xc0008d6000) Stream added, broadcasting: 3\nI0224 15:02:54.434266    3042 log.go:172] (0xc0008de8f0) Reply frame received for 3\nI0224 15:02:54.434303    3042 log.go:172] (0xc0008de8f0) (0xc00012c1e0) Create stream\nI0224 15:02:54.434321    3042 log.go:172] (0xc0008de8f0) (0xc00012c1e0) Stream added, broadcasting: 5\nI0224 15:02:54.435309    3042 log.go:172] (0xc0008de8f0) Reply frame received for 5\nI0224 15:02:54.619780    3042 log.go:172] (0xc0008de8f0) Data frame received for 3\nI0224 15:02:54.619805    3042 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0224 15:02:54.619812    3042 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0224 15:02:54.619830    3042 log.go:172] (0xc0008de8f0) Data frame received for 5\nI0224 15:02:54.619838    3042 log.go:172] (0xc00012c1e0) (5) Data frame handling\nI0224 15:02:54.619853    3042 log.go:172] (0xc00012c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 15:02:54.739928    3042 log.go:172] (0xc0008de8f0) (0xc0008d6000) Stream removed, broadcasting: 3\nI0224 15:02:54.740050    3042 log.go:172] (0xc0008de8f0) Data frame received for 1\nI0224 15:02:54.740072    3042 log.go:172] (0xc0008d6b40) (1) Data frame handling\nI0224 15:02:54.740099    3042 log.go:172] (0xc0008de8f0) (0xc00012c1e0) Stream removed, broadcasting: 5\nI0224 15:02:54.740184    3042 log.go:172] (0xc0008d6b40) (1) Data frame sent\nI0224 15:02:54.740232    3042 log.go:172] (0xc0008de8f0) (0xc0008d6b40) Stream removed, broadcasting: 1\nI0224 15:02:54.740256    3042 log.go:172] 
(0xc0008de8f0) Go away received\nI0224 15:02:54.740660    3042 log.go:172] (0xc0008de8f0) (0xc0008d6b40) Stream removed, broadcasting: 1\nI0224 15:02:54.740675    3042 log.go:172] (0xc0008de8f0) (0xc0008d6000) Stream removed, broadcasting: 3\nI0224 15:02:54.740687    3042 log.go:172] (0xc0008de8f0) (0xc00012c1e0) Stream removed, broadcasting: 5\n"
Feb 24 15:02:54.750: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 15:02:54.750: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 15:02:54.756: INFO: Found 1 stateful pods, waiting for 3
Feb 24 15:03:04.766: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 15:03:04.766: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 15:03:04.766: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 24 15:03:14.763: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 15:03:14.763: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 15:03:14.763: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 24 15:03:14.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 15:03:15.248: INFO: stderr: "I0224 15:03:14.951196    3058 log.go:172] (0xc0008786e0) (0xc000826aa0) Create stream\nI0224 15:03:14.951338    3058 log.go:172] (0xc0008786e0) (0xc000826aa0) Stream added, broadcasting: 1\nI0224 15:03:14.965272    3058 log.go:172] (0xc0008786e0) Reply frame received for 1\nI0224 15:03:14.965297    3058 log.go:172] (0xc0008786e0) (0xc000826000) Create stream\nI0224 15:03:14.965303    3058 log.go:172] (0xc0008786e0) (0xc000826000) Stream added, broadcasting: 3\nI0224 15:03:14.966928    3058 log.go:172] (0xc0008786e0) Reply frame received for 3\nI0224 15:03:14.966972    3058 log.go:172] (0xc0008786e0) (0xc0005fc500) Create stream\nI0224 15:03:14.966989    3058 log.go:172] (0xc0008786e0) (0xc0005fc500) Stream added, broadcasting: 5\nI0224 15:03:14.968324    3058 log.go:172] (0xc0008786e0) Reply frame received for 5\nI0224 15:03:15.113975    3058 log.go:172] (0xc0008786e0) Data frame received for 5\nI0224 15:03:15.114046    3058 log.go:172] (0xc0005fc500) (5) Data frame handling\nI0224 15:03:15.114087    3058 log.go:172] (0xc0005fc500) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 15:03:15.114250    3058 log.go:172] (0xc0008786e0) Data frame received for 3\nI0224 15:03:15.114330    3058 log.go:172] (0xc000826000) (3) Data frame handling\nI0224 15:03:15.114381    3058 log.go:172] (0xc000826000) (3) Data frame sent\nI0224 15:03:15.239414    3058 log.go:172] (0xc0008786e0) (0xc000826000) Stream removed, broadcasting: 3\nI0224 15:03:15.239721    3058 log.go:172] (0xc0008786e0) Data frame received for 1\nI0224 15:03:15.239862    3058 log.go:172] (0xc0008786e0) (0xc0005fc500) Stream removed, broadcasting: 5\nI0224 15:03:15.240083    3058 log.go:172] (0xc000826aa0) (1) Data frame handling\nI0224 15:03:15.240172    3058 log.go:172] (0xc000826aa0) (1) Data frame sent\nI0224 15:03:15.240184    3058 log.go:172] (0xc0008786e0) (0xc000826aa0) Stream removed, broadcasting: 1\nI0224 15:03:15.240211    3058 log.go:172] 
(0xc0008786e0) Go away received\nI0224 15:03:15.240813    3058 log.go:172] (0xc0008786e0) (0xc000826aa0) Stream removed, broadcasting: 1\nI0224 15:03:15.240940    3058 log.go:172] (0xc0008786e0) (0xc000826000) Stream removed, broadcasting: 3\nI0224 15:03:15.240971    3058 log.go:172] (0xc0008786e0) (0xc0005fc500) Stream removed, broadcasting: 5\n"
Feb 24 15:03:15.248: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 15:03:15.248: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 24 15:03:15.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 15:03:15.785: INFO: stderr: "I0224 15:03:15.394963    3073 log.go:172] (0xc0008ec630) (0xc0005d2a00) Create stream\nI0224 15:03:15.395063    3073 log.go:172] (0xc0008ec630) (0xc0005d2a00) Stream added, broadcasting: 1\nI0224 15:03:15.397062    3073 log.go:172] (0xc0008ec630) Reply frame received for 1\nI0224 15:03:15.397087    3073 log.go:172] (0xc0008ec630) (0xc0005d2aa0) Create stream\nI0224 15:03:15.397094    3073 log.go:172] (0xc0008ec630) (0xc0005d2aa0) Stream added, broadcasting: 3\nI0224 15:03:15.397964    3073 log.go:172] (0xc0008ec630) Reply frame received for 3\nI0224 15:03:15.397990    3073 log.go:172] (0xc0008ec630) (0xc0005d2b40) Create stream\nI0224 15:03:15.398001    3073 log.go:172] (0xc0008ec630) (0xc0005d2b40) Stream added, broadcasting: 5\nI0224 15:03:15.398919    3073 log.go:172] (0xc0008ec630) Reply frame received for 5\nI0224 15:03:15.527872    3073 log.go:172] (0xc0008ec630) Data frame received for 5\nI0224 15:03:15.527909    3073 log.go:172] (0xc0005d2b40) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 15:03:15.528021    3073 log.go:172] (0xc0005d2b40) (5) Data frame sent\nI0224 15:03:15.631328    3073 log.go:172] (0xc0008ec630) Data frame received for 3\nI0224 15:03:15.631365    3073 log.go:172] (0xc0005d2aa0) (3) Data frame handling\nI0224 15:03:15.631386    3073 log.go:172] (0xc0005d2aa0) (3) Data frame sent\nI0224 15:03:15.775035    3073 log.go:172] (0xc0008ec630) Data frame received for 1\nI0224 15:03:15.775136    3073 log.go:172] (0xc0005d2a00) (1) Data frame handling\nI0224 15:03:15.775177    3073 log.go:172] (0xc0008ec630) (0xc0005d2aa0) Stream removed, broadcasting: 3\nI0224 15:03:15.775248    3073 log.go:172] (0xc0005d2a00) (1) Data frame sent\nI0224 15:03:15.775280    3073 log.go:172] (0xc0008ec630) (0xc0005d2a00) Stream removed, broadcasting: 1\nI0224 15:03:15.775728    3073 log.go:172] (0xc0008ec630) (0xc0005d2b40) Stream removed, broadcasting: 5\nI0224 15:03:15.775843    3073 log.go:172] 
(0xc0008ec630) Go away received\nI0224 15:03:15.776080    3073 log.go:172] (0xc0008ec630) (0xc0005d2a00) Stream removed, broadcasting: 1\nI0224 15:03:15.776122    3073 log.go:172] (0xc0008ec630) (0xc0005d2aa0) Stream removed, broadcasting: 3\nI0224 15:03:15.777069    3073 log.go:172] (0xc0008ec630) (0xc0005d2b40) Stream removed, broadcasting: 5\n"
Feb 24 15:03:15.785: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 15:03:15.785: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 24 15:03:15.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 24 15:03:16.421: INFO: stderr: "I0224 15:03:16.119411    3092 log.go:172] (0xc00090c370) (0xc00064c6e0) Create stream\nI0224 15:03:16.119558    3092 log.go:172] (0xc00090c370) (0xc00064c6e0) Stream added, broadcasting: 1\nI0224 15:03:16.132108    3092 log.go:172] (0xc00090c370) Reply frame received for 1\nI0224 15:03:16.132146    3092 log.go:172] (0xc00090c370) (0xc00064c780) Create stream\nI0224 15:03:16.132152    3092 log.go:172] (0xc00090c370) (0xc00064c780) Stream added, broadcasting: 3\nI0224 15:03:16.134454    3092 log.go:172] (0xc00090c370) Reply frame received for 3\nI0224 15:03:16.134479    3092 log.go:172] (0xc00090c370) (0xc000914000) Create stream\nI0224 15:03:16.134488    3092 log.go:172] (0xc00090c370) (0xc000914000) Stream added, broadcasting: 5\nI0224 15:03:16.135682    3092 log.go:172] (0xc00090c370) Reply frame received for 5\nI0224 15:03:16.231374    3092 log.go:172] (0xc00090c370) Data frame received for 5\nI0224 15:03:16.231450    3092 log.go:172] (0xc000914000) (5) Data frame handling\nI0224 15:03:16.231495    3092 log.go:172] (0xc000914000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0224 15:03:16.275265    3092 log.go:172] (0xc00090c370) Data frame received for 3\nI0224 15:03:16.275375    3092 log.go:172] (0xc00064c780) (3) Data frame handling\nI0224 15:03:16.275419    3092 log.go:172] (0xc00064c780) (3) Data frame sent\nI0224 15:03:16.409516    3092 log.go:172] (0xc00090c370) (0xc00064c780) Stream removed, broadcasting: 3\nI0224 15:03:16.409700    3092 log.go:172] (0xc00090c370) Data frame received for 1\nI0224 15:03:16.409744    3092 log.go:172] (0xc00064c6e0) (1) Data frame handling\nI0224 15:03:16.409798    3092 log.go:172] (0xc00064c6e0) (1) Data frame sent\nI0224 15:03:16.409819    3092 log.go:172] (0xc00090c370) (0xc000914000) Stream removed, broadcasting: 5\nI0224 15:03:16.409880    3092 log.go:172] (0xc00090c370) (0xc00064c6e0) Stream removed, broadcasting: 1\nI0224 15:03:16.411308    3092 log.go:172] 
(0xc00090c370) (0xc00064c6e0) Stream removed, broadcasting: 1\nI0224 15:03:16.411396    3092 log.go:172] (0xc00090c370) (0xc00064c780) Stream removed, broadcasting: 3\nI0224 15:03:16.411416    3092 log.go:172] (0xc00090c370) (0xc000914000) Stream removed, broadcasting: 5\nI0224 15:03:16.411759    3092 log.go:172] (0xc00090c370) Go away received\n"
Feb 24 15:03:16.421: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 24 15:03:16.421: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 24 15:03:16.422: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 15:03:16.427: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 24 15:03:26.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 15:03:26.446: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 15:03:26.446: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 15:03:26.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999164s
Feb 24 15:03:27.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974457102s
Feb 24 15:03:28.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.964632443s
Feb 24 15:03:29.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.948523962s
Feb 24 15:03:30.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938424766s
Feb 24 15:03:31.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.918570084s
Feb 24 15:03:32.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.905862539s
Feb 24 15:03:33.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.70011271s
Feb 24 15:03:34.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.687490184s
Feb 24 15:03:35.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 672.41455ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7479
Feb 24 15:03:36.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 15:03:37.296: INFO: stderr: "I0224 15:03:36.972123    3112 log.go:172] (0xc0008bbce0) (0xc0008a7d60) Create stream\nI0224 15:03:36.972296    3112 log.go:172] (0xc0008bbce0) (0xc0008a7d60) Stream added, broadcasting: 1\nI0224 15:03:36.984632    3112 log.go:172] (0xc0008bbce0) Reply frame received for 1\nI0224 15:03:36.984711    3112 log.go:172] (0xc0008bbce0) (0xc0001bfae0) Create stream\nI0224 15:03:36.984728    3112 log.go:172] (0xc0008bbce0) (0xc0001bfae0) Stream added, broadcasting: 3\nI0224 15:03:36.987210    3112 log.go:172] (0xc0008bbce0) Reply frame received for 3\nI0224 15:03:36.987273    3112 log.go:172] (0xc0008bbce0) (0xc0007f20a0) Create stream\nI0224 15:03:36.987288    3112 log.go:172] (0xc0008bbce0) (0xc0007f20a0) Stream added, broadcasting: 5\nI0224 15:03:36.990058    3112 log.go:172] (0xc0008bbce0) Reply frame received for 5\nI0224 15:03:37.157122    3112 log.go:172] (0xc0008bbce0) Data frame received for 5\nI0224 15:03:37.157250    3112 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0224 15:03:37.157276    3112 log.go:172] (0xc0007f20a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 15:03:37.158729    3112 log.go:172] (0xc0008bbce0) Data frame received for 3\nI0224 15:03:37.158761    3112 log.go:172] (0xc0001bfae0) (3) Data frame handling\nI0224 15:03:37.158779    3112 log.go:172] (0xc0001bfae0) (3) Data frame sent\nI0224 15:03:37.286978    3112 log.go:172] (0xc0008bbce0) Data frame received for 1\nI0224 15:03:37.287193    3112 log.go:172] (0xc0008bbce0) (0xc0001bfae0) Stream removed, broadcasting: 3\nI0224 15:03:37.287261    3112 log.go:172] (0xc0008a7d60) (1) Data frame handling\nI0224 15:03:37.287308    3112 log.go:172] (0xc0008a7d60) (1) Data frame sent\nI0224 15:03:37.287372    3112 log.go:172] (0xc0008bbce0) (0xc0007f20a0) Stream removed, broadcasting: 5\nI0224 15:03:37.287447    3112 log.go:172] (0xc0008bbce0) (0xc0008a7d60) Stream removed, broadcasting: 1\nI0224 15:03:37.287480    3112 log.go:172] 
(0xc0008bbce0) Go away received\nI0224 15:03:37.287925    3112 log.go:172] (0xc0008bbce0) (0xc0008a7d60) Stream removed, broadcasting: 1\nI0224 15:03:37.287975    3112 log.go:172] (0xc0008bbce0) (0xc0001bfae0) Stream removed, broadcasting: 3\nI0224 15:03:37.288000    3112 log.go:172] (0xc0008bbce0) (0xc0007f20a0) Stream removed, broadcasting: 5\n"
Feb 24 15:03:37.297: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 15:03:37.297: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 15:03:37.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 15:03:37.599: INFO: stderr: "I0224 15:03:37.435998    3129 log.go:172] (0xc0007d8370) (0xc0002c0640) Create stream\nI0224 15:03:37.436239    3129 log.go:172] (0xc0007d8370) (0xc0002c0640) Stream added, broadcasting: 1\nI0224 15:03:37.439668    3129 log.go:172] (0xc0007d8370) Reply frame received for 1\nI0224 15:03:37.439709    3129 log.go:172] (0xc0007d8370) (0xc0002c06e0) Create stream\nI0224 15:03:37.439719    3129 log.go:172] (0xc0007d8370) (0xc0002c06e0) Stream added, broadcasting: 3\nI0224 15:03:37.440754    3129 log.go:172] (0xc0007d8370) Reply frame received for 3\nI0224 15:03:37.440783    3129 log.go:172] (0xc0007d8370) (0xc000740000) Create stream\nI0224 15:03:37.440800    3129 log.go:172] (0xc0007d8370) (0xc000740000) Stream added, broadcasting: 5\nI0224 15:03:37.441800    3129 log.go:172] (0xc0007d8370) Reply frame received for 5\nI0224 15:03:37.515061    3129 log.go:172] (0xc0007d8370) Data frame received for 3\nI0224 15:03:37.515119    3129 log.go:172] (0xc0002c06e0) (3) Data frame handling\nI0224 15:03:37.515168    3129 log.go:172] (0xc0002c06e0) (3) Data frame sent\nI0224 15:03:37.515332    3129 log.go:172] (0xc0007d8370) Data frame received for 5\nI0224 15:03:37.515348    3129 log.go:172] (0xc000740000) (5) Data frame handling\nI0224 15:03:37.515363    3129 log.go:172] (0xc000740000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 15:03:37.594338    3129 log.go:172] (0xc0007d8370) Data frame received for 1\nI0224 15:03:37.594458    3129 log.go:172] (0xc0002c0640) (1) Data frame handling\nI0224 15:03:37.594494    3129 log.go:172] (0xc0002c0640) (1) Data frame sent\nI0224 15:03:37.594542    3129 log.go:172] (0xc0007d8370) (0xc0002c06e0) Stream removed, broadcasting: 3\nI0224 15:03:37.594614    3129 log.go:172] (0xc0007d8370) (0xc0002c0640) Stream removed, broadcasting: 1\nI0224 15:03:37.594653    3129 log.go:172] (0xc0007d8370) (0xc000740000) Stream removed, broadcasting: 5\nI0224 15:03:37.594671    3129 log.go:172] 
(0xc0007d8370) Go away received\nI0224 15:03:37.594863    3129 log.go:172] (0xc0007d8370) (0xc0002c0640) Stream removed, broadcasting: 1\nI0224 15:03:37.594879    3129 log.go:172] (0xc0007d8370) (0xc0002c06e0) Stream removed, broadcasting: 3\nI0224 15:03:37.594887    3129 log.go:172] (0xc0007d8370) (0xc000740000) Stream removed, broadcasting: 5\n"
Feb 24 15:03:37.599: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 15:03:37.599: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 15:03:37.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7479 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 24 15:03:38.156: INFO: stderr: "I0224 15:03:37.785395    3145 log.go:172] (0xc000117080) (0xc00060ad20) Create stream\nI0224 15:03:37.785602    3145 log.go:172] (0xc000117080) (0xc00060ad20) Stream added, broadcasting: 1\nI0224 15:03:37.792033    3145 log.go:172] (0xc000117080) Reply frame received for 1\nI0224 15:03:37.792079    3145 log.go:172] (0xc000117080) (0xc00086c000) Create stream\nI0224 15:03:37.792112    3145 log.go:172] (0xc000117080) (0xc00086c000) Stream added, broadcasting: 3\nI0224 15:03:37.793230    3145 log.go:172] (0xc000117080) Reply frame received for 3\nI0224 15:03:37.793259    3145 log.go:172] (0xc000117080) (0xc00086c0a0) Create stream\nI0224 15:03:37.793268    3145 log.go:172] (0xc000117080) (0xc00086c0a0) Stream added, broadcasting: 5\nI0224 15:03:37.796953    3145 log.go:172] (0xc000117080) Reply frame received for 5\nI0224 15:03:37.975166    3145 log.go:172] (0xc000117080) Data frame received for 5\nI0224 15:03:37.975541    3145 log.go:172] (0xc00086c0a0) (5) Data frame handling\nI0224 15:03:37.975589    3145 log.go:172] (0xc00086c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0224 15:03:37.976595    3145 log.go:172] (0xc000117080) Data frame received for 3\nI0224 15:03:37.976632    3145 log.go:172] (0xc00086c000) (3) Data frame handling\nI0224 15:03:37.976664    3145 log.go:172] (0xc00086c000) (3) Data frame sent\nI0224 15:03:38.145900    3145 log.go:172] (0xc000117080) (0xc00086c0a0) Stream removed, broadcasting: 5\nI0224 15:03:38.146141    3145 log.go:172] (0xc000117080) Data frame received for 1\nI0224 15:03:38.146198    3145 log.go:172] (0xc00060ad20) (1) Data frame handling\nI0224 15:03:38.146235    3145 log.go:172] (0xc00060ad20) (1) Data frame sent\nI0224 15:03:38.146750    3145 log.go:172] (0xc000117080) (0xc00060ad20) Stream removed, broadcasting: 1\nI0224 15:03:38.146981    3145 log.go:172] (0xc000117080) (0xc00086c000) Stream removed, broadcasting: 3\nI0224 15:03:38.147035    3145 log.go:172] 
(0xc000117080) Go away received\nI0224 15:03:38.147639    3145 log.go:172] (0xc000117080) (0xc00060ad20) Stream removed, broadcasting: 1\nI0224 15:03:38.147664    3145 log.go:172] (0xc000117080) (0xc00086c000) Stream removed, broadcasting: 3\nI0224 15:03:38.147676    3145 log.go:172] (0xc000117080) (0xc00086c0a0) Stream removed, broadcasting: 5\n"
Feb 24 15:03:38.156: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 24 15:03:38.156: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 24 15:03:38.156: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 24 15:03:58.185: INFO: Deleting all statefulset in ns statefulset-7479
Feb 24 15:03:58.191: INFO: Scaling statefulset ss to 0
Feb 24 15:03:58.206: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 15:03:58.211: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:03:58.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7479" for this suite.
Feb 24 15:04:04.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:04:04.429: INFO: namespace statefulset-7479 deletion completed in 6.172393825s

• [SLOW TEST:91.493 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:04:04.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 24 15:04:04.571: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591641,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 24 15:04:04.572: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591642,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 24 15:04:04.572: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591644,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 24 15:04:14.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591659,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 24 15:04:14.720: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591661,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 24 15:04:14.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8924,SelfLink:/api/v1/namespaces/watch-8924/configmaps/e2e-watch-test-label-changed,UID:db41ca74-bf16-4b92-b8f2-31f6c9ea757e,ResourceVersion:25591662,Generation:0,CreationTimestamp:2020-02-24 15:04:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:04:14.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8924" for this suite.
Feb 24 15:04:20.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:04:20.876: INFO: namespace watch-8924 deletion completed in 6.151028424s

• [SLOW TEST:16.446 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:04:20.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-76570e89-9201-47a7-a425-0342c42fc753
STEP: Creating a pod to test consume secrets
Feb 24 15:04:21.000: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779" in namespace "projected-3759" to be "success or failure"
Feb 24 15:04:21.024: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779": Phase="Pending", Reason="", readiness=false. Elapsed: 24.068758ms
Feb 24 15:04:23.031: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031543835s
Feb 24 15:04:25.043: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043407337s
Feb 24 15:04:27.055: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054868167s
Feb 24 15:04:29.061: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061635192s
STEP: Saw pod success
Feb 24 15:04:29.061: INFO: Pod "pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779" satisfied condition "success or failure"
Feb 24 15:04:29.066: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779 container projected-secret-volume-test: 
STEP: delete the pod
Feb 24 15:04:29.151: INFO: Waiting for pod pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779 to disappear
Feb 24 15:04:29.160: INFO: Pod pod-projected-secrets-3f23b13b-be26-43f2-a7b3-93c093249779 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:04:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3759" for this suite.
Feb 24 15:04:35.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:04:35.302: INFO: namespace projected-3759 deletion completed in 6.133867369s

• [SLOW TEST:14.426 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:04:35.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:05:28.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5666" for this suite.
Feb 24 15:05:34.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:05:34.648: INFO: namespace container-runtime-5666 deletion completed in 6.156449054s

• [SLOW TEST:59.346 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:05:34.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 15:05:34.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:05:42.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1565" for this suite.
Feb 24 15:06:24.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:06:25.093: INFO: namespace pods-1565 deletion completed in 42.2036508s

• [SLOW TEST:50.445 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:06:25.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 24 15:06:41.255: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:41.263: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:43.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:43.269: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:45.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:45.272: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:47.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:47.273: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:49.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:49.280: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:51.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:51.270: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:53.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:53.273: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:55.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:55.275: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:57.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:57.273: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:06:59.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:06:59.279: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:07:01.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:07:01.273: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 24 15:07:03.263: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 24 15:07:03.274: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:07:03.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9481" for this suite.
Feb 24 15:07:25.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:07:25.472: INFO: namespace container-lifecycle-hook-9481 deletion completed in 22.148479211s

• [SLOW TEST:60.379 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:07:25.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 24 15:07:25.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7930'
Feb 24 15:07:28.293: INFO: stderr: ""
Feb 24 15:07:28.293: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 15:07:28.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7930'
Feb 24 15:07:28.570: INFO: stderr: ""
Feb 24 15:07:28.570: INFO: stdout: "update-demo-nautilus-g64jp update-demo-nautilus-jdv4s "
Feb 24 15:07:28.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:28.724: INFO: stderr: ""
Feb 24 15:07:28.724: INFO: stdout: ""
Feb 24 15:07:28.724: INFO: update-demo-nautilus-g64jp is created but not running
Feb 24 15:07:33.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7930'
Feb 24 15:07:33.833: INFO: stderr: ""
Feb 24 15:07:33.833: INFO: stdout: "update-demo-nautilus-g64jp update-demo-nautilus-jdv4s "
Feb 24 15:07:33.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:33.932: INFO: stderr: ""
Feb 24 15:07:33.932: INFO: stdout: ""
Feb 24 15:07:33.932: INFO: update-demo-nautilus-g64jp is created but not running
Feb 24 15:07:38.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7930'
Feb 24 15:07:39.098: INFO: stderr: ""
Feb 24 15:07:39.098: INFO: stdout: "update-demo-nautilus-g64jp update-demo-nautilus-jdv4s "
Feb 24 15:07:39.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:39.199: INFO: stderr: ""
Feb 24 15:07:39.199: INFO: stdout: "true"
Feb 24 15:07:39.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:39.312: INFO: stderr: ""
Feb 24 15:07:39.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 15:07:39.312: INFO: validating pod update-demo-nautilus-g64jp
Feb 24 15:07:39.322: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 15:07:39.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 15:07:39.322: INFO: update-demo-nautilus-g64jp is verified up and running
Feb 24 15:07:39.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdv4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:39.426: INFO: stderr: ""
Feb 24 15:07:39.426: INFO: stdout: "true"
Feb 24 15:07:39.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdv4s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7930'
Feb 24 15:07:39.533: INFO: stderr: ""
Feb 24 15:07:39.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 15:07:39.533: INFO: validating pod update-demo-nautilus-jdv4s
Feb 24 15:07:39.552: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 15:07:39.552: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 15:07:39.552: INFO: update-demo-nautilus-jdv4s is verified up and running
STEP: using delete to clean up resources
Feb 24 15:07:39.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7930'
Feb 24 15:07:39.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 15:07:39.664: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 24 15:07:39.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7930'
Feb 24 15:07:39.794: INFO: stderr: "No resources found.\n"
Feb 24 15:07:39.794: INFO: stdout: ""
Feb 24 15:07:39.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7930 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 15:07:39.894: INFO: stderr: ""
Feb 24 15:07:39.895: INFO: stdout: "update-demo-nautilus-g64jp\nupdate-demo-nautilus-jdv4s\n"
Feb 24 15:07:40.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7930'
Feb 24 15:07:40.518: INFO: stderr: "No resources found.\n"
Feb 24 15:07:40.518: INFO: stdout: ""
Feb 24 15:07:40.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7930 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 15:07:40.627: INFO: stderr: ""
Feb 24 15:07:40.627: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:07:40.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7930" for this suite.
Feb 24 15:08:02.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:08:02.759: INFO: namespace kubectl-7930 deletion completed in 22.122490685s

• [SLOW TEST:37.284 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
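The go-template the test passes to `kubectl get pods -o template` reduces to: print `true` iff the pod has a `containerStatuses` entry named `update-demo` whose state contains `running`. A minimal Python sketch of the same check against mock pod objects (the pod JSON here is illustrative, not from this run):

```python
def container_running(pod: dict, name: str) -> bool:
    """Mirror of the e2e template: true iff the named container has a
    containerStatuses entry with a 'running' state."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# Mock pods (illustrative): one still pending, one running.
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-02-24T15:07:35Z"}}}]}}

print(container_running(pending, "update-demo"))  # False -> the "created but not running" lines
print(container_running(running, "update-demo"))  # True  -> the "true" stdout at 15:07:39
```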
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:08:02.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 24 15:08:02.874: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:08:03.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2985" for this suite.
Feb 24 15:08:09.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:08:09.340: INFO: namespace kubectl-2985 deletion completed in 6.331238396s

• [SLOW TEST:6.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
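With `-p 0` the proxy binds an ephemeral port, so the test must recover the actual port from the process's startup banner rather than from the flag. A hedged Python sketch of that parsing (the banner format shown matches what kubectl proxy prints on recent versions, but treat it as an assumption):

```python
import re

def parse_proxy_port(banner: str) -> int:
    """Extract the port from kubectl proxy's startup line,
    e.g. 'Starting to serve on 127.0.0.1:38291' (port is illustrative)."""
    m = re.search(r"Starting to serve on .*:(\d+)", banner)
    if not m:
        raise ValueError(f"unexpected proxy banner: {banner!r}")
    return int(m.group(1))

print(parse_proxy_port("Starting to serve on 127.0.0.1:38291"))  # 38291
```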
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:08:09.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 15:08:09.442: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:08:10.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2825" for this suite.
Feb 24 15:08:16.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:08:16.760: INFO: namespace custom-resource-definition-2825 deletion completed in 6.210015061s

• [SLOW TEST:7.419 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
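The cluster serves apiextensions.k8s.io/v1beta1 (it appears in the api-versions output later in this run), so a minimal CRD of the kind this test creates and deletes would look roughly like the following sketch (group and names are illustrative, not from this run):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>.
  name: foos.example.com        # illustrative, not from this run
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
```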
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:08:16.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-b759be7e-6de6-4748-9840-1968d472bc15 in namespace container-probe-256
Feb 24 15:08:24.947: INFO: Started pod busybox-b759be7e-6de6-4748-9840-1968d472bc15 in namespace container-probe-256
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 15:08:24.955: INFO: Initial restart count of pod busybox-b759be7e-6de6-4748-9840-1968d472bc15 is 0
Feb 24 15:09:21.738: INFO: Restart count of pod container-probe-256/busybox-b759be7e-6de6-4748-9840-1968d472bc15 is now 1 (56.783037956s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:09:21.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-256" for this suite.
Feb 24 15:09:27.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:09:28.187: INFO: namespace container-probe-256 deletion completed in 6.374720891s

• [SLOW TEST:71.426 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
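The probe exercised here is an exec liveness probe running `cat /tmp/health`; once the file disappears the probe fails and the kubelet restarts the container, which is the restart counted at 15:09:21 (restartCount 0 → 1 after ~57s). A sketch of such a pod, modelled on the standard Kubernetes docs pattern (timings and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo      # illustrative name, not from this run
spec:
  containers:
  - name: busybox
    image: busybox
    # Healthy for 30s, then the probe file is removed and `cat` starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```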
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:09:28.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 15:09:28.245: INFO: Creating ReplicaSet my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a
Feb 24 15:09:28.307: INFO: Pod name my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a: Found 0 pods out of 1
Feb 24 15:09:33.589: INFO: Pod name my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a: Found 1 pods out of 1
Feb 24 15:09:33.589: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a" is running
Feb 24 15:09:35.606: INFO: Pod "my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a-t54ss" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:09:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:09:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:09:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:09:28 +0000 UTC Reason: Message:}])
Feb 24 15:09:35.606: INFO: Trying to dial the pod
Feb 24 15:09:40.666: INFO: Controller my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a: Got expected result from replica 1 [my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a-t54ss]: "my-hostname-basic-1b0213cd-c7a5-42ae-8fc5-b375508ee51a-t54ss", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:09:40.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1877" for this suite.
Feb 24 15:09:46.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:09:46.878: INFO: namespace replicaset-1877 deletion completed in 6.204983854s

• [SLOW TEST:18.690 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:09:46.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 24 15:09:46.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6784'
Feb 24 15:09:47.099: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 15:09:47.099: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 24 15:09:49.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6784'
Feb 24 15:09:49.317: INFO: stderr: ""
Feb 24 15:09:49.318: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:09:49.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6784" for this suite.
Feb 24 15:09:55.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:09:55.528: INFO: namespace kubectl-6784 deletion completed in 6.206948706s

• [SLOW TEST:8.650 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:09:55.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 24 15:09:55.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 24 15:09:55.844: INFO: stderr: ""
Feb 24 15:09:55.844: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:09:55.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6265" for this suite.
Feb 24 15:10:01.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:10:02.068: INFO: namespace kubectl-6265 deletion completed in 6.217217936s

• [SLOW TEST:6.539 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
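The "validating api versions" step boils down to splitting the stdout above on newlines and checking that `v1` appears as an entry. In Python, using a truncated copy of the stdout from this run:

```python
# A few entries copied from the kubectl api-versions stdout above.
stdout = "apps/v1\nbatch/v1\nstorage.k8s.io/v1beta1\nv1\n"

versions = stdout.splitlines()
assert "v1" in versions          # the condition the test asserts
assert "apps/v1" in versions
print(versions[-1])  # v1
```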
SSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:10:02.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490
Feb 24 15:10:02.232: INFO: Pod name my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490: Found 0 pods out of 1
Feb 24 15:10:07.273: INFO: Pod name my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490: Found 1 pods out of 1
Feb 24 15:10:07.273: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490" are running
Feb 24 15:10:11.292: INFO: Pod "my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490-9fn5w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:10:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:10:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:10:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 15:10:02 +0000 UTC Reason: Message:}])
Feb 24 15:10:11.292: INFO: Trying to dial the pod
Feb 24 15:10:16.360: INFO: Controller my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490: Got expected result from replica 1 [my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490-9fn5w]: "my-hostname-basic-6d24d68d-eaf9-437b-b093-2ac60391b490-9fn5w", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:10:16.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7903" for this suite.
Feb 24 15:10:22.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:10:22.664: INFO: namespace replication-controller-7903 deletion completed in 6.250701719s

• [SLOW TEST:20.596 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:10:22.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 15:10:22.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4132'
Feb 24 15:10:23.428: INFO: stderr: ""
Feb 24 15:10:23.428: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 24 15:10:23.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4132'
Feb 24 15:10:23.925: INFO: stderr: ""
Feb 24 15:10:23.925: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 24 15:10:24.933: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:24.933: INFO: Found 0 / 1
Feb 24 15:10:25.986: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:25.986: INFO: Found 0 / 1
Feb 24 15:10:26.939: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:26.940: INFO: Found 0 / 1
Feb 24 15:10:27.934: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:27.934: INFO: Found 0 / 1
Feb 24 15:10:28.950: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:28.950: INFO: Found 0 / 1
Feb 24 15:10:29.939: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:29.940: INFO: Found 0 / 1
Feb 24 15:10:30.937: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:30.937: INFO: Found 0 / 1
Feb 24 15:10:31.934: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:31.934: INFO: Found 1 / 1
Feb 24 15:10:31.934: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 24 15:10:31.939: INFO: Selector matched 1 pods for map[app:redis]
Feb 24 15:10:31.939: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 24 15:10:31.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-mbbtv --namespace=kubectl-4132'
Feb 24 15:10:32.143: INFO: stderr: ""
Feb 24 15:10:32.143: INFO: stdout: "Name:           redis-master-mbbtv\nNamespace:      kubectl-4132\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Mon, 24 Feb 2020 15:10:23 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://6589ab22950849a64f5f8c8ca4d8258c0baed9ebd83a32c792b4384e28b37c64\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 24 Feb 2020 15:10:30 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qmwhh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qmwhh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qmwhh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-4132/redis-master-mbbtv to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb 24 15:10:32.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4132'
Feb 24 15:10:32.334: INFO: stderr: ""
Feb 24 15:10:32.334: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4132\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-mbbtv\n"
Feb 24 15:10:32.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4132'
Feb 24 15:10:32.468: INFO: stderr: ""
Feb 24 15:10:32.468: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4132\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.106.217.20\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb 24 15:10:32.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 24 15:10:32.623: INFO: stderr: ""
Feb 24 15:10:32.623: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 24 Feb 2020 15:10:30 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 24 Feb 2020 15:10:30 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 24 Feb 2020 15:10:30 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 24 Feb 2020 15:10:30 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         204d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         135d\n  kubectl-4132               redis-master-mbbtv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb 24 15:10:32.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4132'
Feb 24 15:10:32.711: INFO: stderr: ""
Feb 24 15:10:32.711: INFO: stdout: "Name:         kubectl-4132\nLabels:       e2e-framework=kubectl\n              e2e-run=f462c093-9538-4cab-9220-52741c5b49ff\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:10:32.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4132" for this suite.
Feb 24 15:10:56.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:10:56.877: INFO: namespace kubectl-4132 deletion completed in 24.160651534s

• [SLOW TEST:34.212 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
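For reference, the `redis-master` replication controller that the describe test above inspects corresponds roughly to the manifest below. The name, labels, selector, image, and container port are taken from the describe output in the log; the remaining structure is an illustrative reconstruction, not the exact fixture the suite ships.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379   # matches "Port: 6379/TCP" in the describe output
```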
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:10:56.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-5180084a-146e-46b1-b5ad-51c1ca99b742
STEP: Creating a pod to test consume secrets
Feb 24 15:10:57.003: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef" in namespace "projected-2161" to be "success or failure"
Feb 24 15:10:57.015: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef": Phase="Pending", Reason="", readiness=false. Elapsed: 12.475773ms
Feb 24 15:10:59.029: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026447704s
Feb 24 15:11:01.036: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033088491s
Feb 24 15:11:03.051: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047867545s
Feb 24 15:11:05.064: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061153604s
STEP: Saw pod success
Feb 24 15:11:05.064: INFO: Pod "pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef" satisfied condition "success or failure"
Feb 24 15:11:05.070: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef container projected-secret-volume-test: 
STEP: delete the pod
Feb 24 15:11:05.310: INFO: Waiting for pod pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef to disappear
Feb 24 15:11:05.325: INFO: Pod pod-projected-secrets-0df04dae-1abd-4e7e-bce3-b3c9ab161eef no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:11:05.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2161" for this suite.
Feb 24 15:11:11.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:11:11.517: INFO: namespace projected-2161 deletion completed in 6.176615416s

• [SLOW TEST:14.640 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
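A pod consuming a projected secret volume with `defaultMode` set, as exercised by the test above, might be sketched as follows. The secret name matches the log; the container image, args, and mount path are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_mode=/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # applied to files that have no per-item mode
      sources:
      - secret:
          name: projected-secret-test-5180084a-146e-46b1-b5ad-51c1ca99b742
```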
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:11:11.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 15:11:11.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8" in namespace "downward-api-7765" to be "success or failure"
Feb 24 15:11:11.654: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.992891ms
Feb 24 15:11:13.664: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018521612s
Feb 24 15:11:15.677: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031252922s
Feb 24 15:11:18.171: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525860822s
Feb 24 15:11:20.179: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.533763594s
STEP: Saw pod success
Feb 24 15:11:20.179: INFO: Pod "downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8" satisfied condition "success or failure"
Feb 24 15:11:20.184: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8 container client-container: 
STEP: delete the pod
Feb 24 15:11:20.293: INFO: Waiting for pod downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8 to disappear
Feb 24 15:11:20.308: INFO: Pod downwardapi-volume-801dd6bd-6178-472a-876b-109df84ed0c8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:11:20.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7765" for this suite.
Feb 24 15:11:26.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:11:26.516: INFO: namespace downward-api-7765 deletion completed in 6.194017985s

• [SLOW TEST:14.999 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
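The memory-limit test above uses a downward API volume with a `resourceFieldRef`. A minimal sketch, assuming an illustrative image and limit value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"   # illustrative; the file then contains the limit in bytes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```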
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:11:26.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e82b5e62-9156-4ee1-9945-e7c8d420a234
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:11:36.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4314" for this suite.
Feb 24 15:11:58.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:11:58.872: INFO: namespace configmap-4314 deletion completed in 22.1424001s

• [SLOW TEST:32.356 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
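ConfigMaps carry binary payloads through the `binaryData` field (base64-encoded), which is what the test above verifies from inside a pod volume. A minimal sketch, with the name from the log and an illustrative payload:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-e82b5e62-9156-4ee1-9945-e7c8d420a234
data:
  data-1: value-1        # illustrative text key
binaryData:
  dump.bin: 3q2+7w==     # base64 for the bytes 0xde 0xad 0xbe 0xef; illustrative
```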
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:11:58.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 24 15:11:58.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707" in namespace "projected-3403" to be "success or failure"
Feb 24 15:11:58.969: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707": Phase="Pending", Reason="", readiness=false. Elapsed: 14.707512ms
Feb 24 15:12:00.978: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024130049s
Feb 24 15:12:02.989: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035464185s
Feb 24 15:12:04.998: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04431904s
Feb 24 15:12:07.009: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05522314s
STEP: Saw pod success
Feb 24 15:12:07.009: INFO: Pod "downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707" satisfied condition "success or failure"
Feb 24 15:12:07.016: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707 container client-container: 
STEP: delete the pod
Feb 24 15:12:07.141: INFO: Waiting for pod downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707 to disappear
Feb 24 15:12:07.150: INFO: Pod downwardapi-volume-c7b75620-9c1b-4571-8e2d-d4dbe860d707 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:12:07.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3403" for this suite.
Feb 24 15:12:13.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:12:13.304: INFO: namespace projected-3403 deletion completed in 6.146613124s

• [SLOW TEST:14.431 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
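The test above relies on downward API fallback behavior: when a container sets no CPU limit, `limits.cpu` resolves to the node's allocatable CPU (4 on `iruya-node`). A sketch using a projected downwardAPI source, with the image and paths assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-cpu-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu: the downward API then reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```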
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:12:13.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 24 15:12:13.501: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.663075ms)
Feb 24 15:12:13.541: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 40.178882ms)
Feb 24 15:12:13.592: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 50.932024ms)
Feb 24 15:12:13.675: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 82.567388ms)
Feb 24 15:12:13.699: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.227293ms)
Feb 24 15:12:13.725: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 25.075635ms)
Feb 24 15:12:13.757: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 31.89756ms)
Feb 24 15:12:13.774: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.19971ms)
Feb 24 15:12:13.789: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.952307ms)
Feb 24 15:12:13.799: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.412489ms)
Feb 24 15:12:13.805: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.375122ms)
Feb 24 15:12:13.810: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.932158ms)
Feb 24 15:12:13.815: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.281244ms)
Feb 24 15:12:13.822: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.475163ms)
Feb 24 15:12:13.828: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.237807ms)
Feb 24 15:12:13.837: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.505178ms)
Feb 24 15:12:13.841: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.702133ms)
Feb 24 15:12:13.846: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.930254ms)
Feb 24 15:12:13.852: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.916511ms)
Feb 24 15:12:13.860: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.559205ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:12:13.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5744" for this suite.
Feb 24 15:12:19.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:12:20.062: INFO: namespace proxy-5744 deletion completed in 6.193120052s

• [SLOW TEST:6.758 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:12:20.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 24 15:12:20.207: INFO: Waiting up to 5m0s for pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81" in namespace "containers-7233" to be "success or failure"
Feb 24 15:12:20.220: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81": Phase="Pending", Reason="", readiness=false. Elapsed: 12.295967ms
Feb 24 15:12:22.225: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017982607s
Feb 24 15:12:24.238: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030080746s
Feb 24 15:12:26.247: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039552912s
Feb 24 15:12:28.260: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052699218s
STEP: Saw pod success
Feb 24 15:12:28.260: INFO: Pod "client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81" satisfied condition "success or failure"
Feb 24 15:12:28.264: INFO: Trying to get logs from node iruya-node pod client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81 container test-container: 
STEP: delete the pod
Feb 24 15:12:28.404: INFO: Waiting for pod client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81 to disappear
Feb 24 15:12:28.419: INFO: Pod client-containers-4153d552-f319-4c4b-b389-ad21b92a0d81 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:12:28.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7233" for this suite.
Feb 24 15:12:34.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:12:34.553: INFO: namespace containers-7233 deletion completed in 6.126373982s

• [SLOW TEST:14.492 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
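Overriding an image's default command and arguments, as the test above does, comes down to setting `command` (the Docker ENTRYPOINT) and `args` (the Docker CMD) on the container spec. A minimal sketch with an assumed busybox image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29              # illustrative image
    command: ["/bin/echo"]           # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image's CMD
```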
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:12:34.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bdfdbefa-175d-45e8-b945-d3297c2709d4
STEP: Creating a pod to test consume configMaps
Feb 24 15:12:34.666: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73" in namespace "projected-9338" to be "success or failure"
Feb 24 15:12:34.674: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483925ms
Feb 24 15:12:36.720: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054561623s
Feb 24 15:12:38.740: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074351728s
Feb 24 15:12:40.753: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087525687s
Feb 24 15:12:42.760: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094464699s
Feb 24 15:12:44.767: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101331604s
STEP: Saw pod success
Feb 24 15:12:44.767: INFO: Pod "pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73" satisfied condition "success or failure"
Feb 24 15:12:44.771: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 15:12:44.812: INFO: Waiting for pod pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73 to disappear
Feb 24 15:12:44.816: INFO: Pod pod-projected-configmaps-641b2ef5-1d3f-4793-b6c6-2e3d5f908e73 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:12:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9338" for this suite.
Feb 24 15:12:50.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:12:51.059: INFO: namespace projected-9338 deletion completed in 6.23543498s

• [SLOW TEST:16.505 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
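"Mappings and Item mode" in the test above refers to the `items` list of a configMap source: each entry remaps a key to a file path and can set a per-file mode. A sketch, with the configMap name from the log and the image, key, and paths assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-bdfdbefa-175d-45e8-b945-d3297c2709d4
          items:
          - key: data-1            # assumed key name
            path: path/to/data-2   # remaps the key to a different file path
            mode: 0400             # per-item mode, overrides any defaultMode
```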
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:12:51.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 24 15:12:51.129: INFO: Waiting up to 5m0s for pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05" in namespace "downward-api-6857" to be "success or failure"
Feb 24 15:12:51.137: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05": Phase="Pending", Reason="", readiness=false. Elapsed: 7.228526ms
Feb 24 15:12:53.143: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013882232s
Feb 24 15:12:55.154: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024207867s
Feb 24 15:12:57.162: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032149572s
Feb 24 15:12:59.173: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04339771s
STEP: Saw pod success
Feb 24 15:12:59.173: INFO: Pod "downward-api-af85dd38-0f03-459f-a041-f4001b881f05" satisfied condition "success or failure"
Feb 24 15:12:59.178: INFO: Trying to get logs from node iruya-node pod downward-api-af85dd38-0f03-459f-a041-f4001b881f05 container dapi-container: 
STEP: delete the pod
Feb 24 15:12:59.279: INFO: Waiting for pod downward-api-af85dd38-0f03-459f-a041-f4001b881f05 to disappear
Feb 24 15:12:59.292: INFO: Pod downward-api-af85dd38-0f03-459f-a041-f4001b881f05 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:12:59.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6857" for this suite.
Feb 24 15:13:07.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:13:07.488: INFO: namespace downward-api-6857 deletion completed in 8.185431221s

• [SLOW TEST:16.429 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
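The Downward API test above succeeds because the pod requests `limits.cpu` and `limits.memory` through `resourceFieldRef` env vars without declaring any resource limits, so the kubelet falls back to the node's allocatable values. A minimal sketch of such a pod spec (names and image are illustrative; this is not the suite's actual manifest — only the container name `dapi-container` comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29          # illustrative image
    command: ["sh", "-c", "env | grep _LIMIT"]
    # No resources.limits declared: the Downward API then reports the
    # node's allocatable CPU/memory instead of a per-container limit.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The container prints the injected variables and exits, which is why the pod runs through Pending to Phase="Succeeded" rather than staying Running.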
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:13:07.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 24 15:13:07.661: INFO: Waiting up to 5m0s for pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8" in namespace "containers-3246" to be "success or failure"
Feb 24 15:13:07.691: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.203325ms
Feb 24 15:13:09.702: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040357575s
Feb 24 15:13:11.712: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050528662s
Feb 24 15:13:13.729: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067784155s
Feb 24 15:13:15.737: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075900812s
STEP: Saw pod success
Feb 24 15:13:15.737: INFO: Pod "client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8" satisfied condition "success or failure"
Feb 24 15:13:15.741: INFO: Trying to get logs from node iruya-node pod client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8 container test-container: 
STEP: delete the pod
Feb 24 15:13:15.836: INFO: Waiting for pod client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8 to disappear
Feb 24 15:13:15.846: INFO: Pod client-containers-7cf29f2e-1c79-4821-9464-44aab55fdbc8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:13:15.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3246" for this suite.
Feb 24 15:13:21.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:13:22.056: INFO: namespace containers-3246 deletion completed in 6.199800037s

• [SLOW TEST:14.568 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
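The Docker Containers test exercises the rule that a container's `args` replaces the image's default arguments (Docker CMD) while leaving its entrypoint intact. A hypothetical manifest in the same spirit (name and image are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # illustrative image
    # `args` overrides the image's default CMD; `command` (unset here)
    # would override the ENTRYPOINT instead.
    args: ["echo", "override", "arguments"]
```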
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:13:22.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 24 15:16:21.482: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 15:16:21.528: INFO: Pod pod-with-poststart-exec-hook still exists
[... identical "Waiting for pod pod-with-poststart-exec-hook to disappear" / "still exists" pairs, polled at 2s intervals from 15:16:23 through 15:18:15, elided ...]
Feb 24 15:18:17.528: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 15:18:17.534: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:18:17.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9578" for this suite.
Feb 24 15:18:39.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:18:39.687: INFO: namespace container-lifecycle-hook-9578 deletion completed in 22.148442573s

• [SLOW TEST:317.631 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
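Most of this test's 317s runtime is the deletion poll above, not the hook itself: the postStart exec hook fires as soon as the container starts, and the test then waits for the pod (and its termination grace period) to drain. A hedged sketch of a pod with such a hook — the pod and container names come from the log, but the image, command, and handler endpoint are hypothetical stand-ins for the suite's HTTPGet handler pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.29          # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside this container immediately after it starts; here it
          # notifies a (hypothetical) handler service so the test can verify
          # the hook executed. The container is not Running until this returns.
          command: ["sh", "-c", "wget -qO- http://hook-handler:8080/echo?msg=poststart"]
```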
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:18:39.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 24 15:18:39.773: INFO: Waiting up to 5m0s for pod "pod-65feb547-2cac-4e5f-8830-66203cd59434" in namespace "emptydir-752" to be "success or failure"
Feb 24 15:18:39.791: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434": Phase="Pending", Reason="", readiness=false. Elapsed: 17.831726ms
Feb 24 15:18:41.800: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026709094s
Feb 24 15:18:43.811: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038111517s
Feb 24 15:18:45.819: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045602495s
Feb 24 15:18:47.827: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054366258s
STEP: Saw pod success
Feb 24 15:18:47.827: INFO: Pod "pod-65feb547-2cac-4e5f-8830-66203cd59434" satisfied condition "success or failure"
Feb 24 15:18:47.831: INFO: Trying to get logs from node iruya-node pod pod-65feb547-2cac-4e5f-8830-66203cd59434 container test-container: 
STEP: delete the pod
Feb 24 15:18:47.883: INFO: Waiting for pod pod-65feb547-2cac-4e5f-8830-66203cd59434 to disappear
Feb 24 15:18:47.909: INFO: Pod pod-65feb547-2cac-4e5f-8830-66203cd59434 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:18:47.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-752" for this suite.
Feb 24 15:18:54.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:18:54.132: INFO: namespace emptydir-752 deletion completed in 6.208376776s

• [SLOW TEST:14.444 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
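In the EmptyDir test name, "(root,0644,tmpfs)" encodes the case being checked: a file written as root with mode 0644 onto a memory-backed (tmpfs) emptyDir volume. A minimal sketch of that setup (names and image are illustrative, not the suite's generated manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # illustrative image
    # Write a file with mode 0644 as root, then list it so the test
    # output can be checked for the expected ownership and permissions.
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```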
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 24 15:18:54.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-9f86bbf6-2beb-4c47-afb1-52db7ccdbf2a
STEP: Creating a pod to test consume secrets
Feb 24 15:18:54.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7" in namespace "projected-1666" to be "success or failure"
Feb 24 15:18:54.270: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058836ms
Feb 24 15:18:56.278: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018199749s
Feb 24 15:18:58.283: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023718285s
Feb 24 15:19:00.314: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053847133s
Feb 24 15:19:02.321: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061069858s
STEP: Saw pod success
Feb 24 15:19:02.321: INFO: Pod "pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7" satisfied condition "success or failure"
Feb 24 15:19:02.324: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7 container projected-secret-volume-test: 
STEP: delete the pod
Feb 24 15:19:02.526: INFO: Waiting for pod pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7 to disappear
Feb 24 15:19:02.603: INFO: Pod pod-projected-secrets-2b126d4b-bccb-42a7-b1f1-1a838c6a0ea7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 24 15:19:02.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1666" for this suite.
Feb 24 15:19:08.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 24 15:19:08.757: INFO: namespace projected-1666 deletion completed in 6.146478908s

• [SLOW TEST:14.624 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
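In the Projected secret test name, "mappings" means the secret's keys are remapped to custom file paths via `items`, and "Item Mode" means a per-item file mode is set on the projected file. A hedged sketch of the shape being tested — the secret name prefix and container name come from the log, but the key, path, mode, and image are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map
data:
  data-1: dmFsdWUtMQ==           # base64 of "value-1" (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29          # illustrative image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping"
            mode: 0400              # the per-item "Item Mode"
```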
SSSSSSSSS
Feb 24 15:19:08.757: INFO: Running AfterSuite actions on all nodes
Feb 24 15:19:08.757: INFO: Running AfterSuite actions on node 1
Feb 24 15:19:08.757: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8587.334 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS